The following is a guest post from Toby Olshanetsky, the cofounder and CEO of prooV, the first proof-of-concept (PoC)-as-a-Service platform, which helps enterprises find, test-drive and implement new technologies. He has held senior roles and led several successful startups over the past 20 years, in technologies including cybersecurity, mobile development, e-commerce and online banking.

Artificial intelligence (AI) and machine learning are becoming ubiquitous. They are among the hottest technologies in the proof-of-concept phase, particularly in financial services; Merrill Lynch predicts the global market for AI and robots will approach $153 billion by 2020.
These technologies offer tremendous benefits by reducing human error, but there are risks, especially when relying on bots to make split-second decisions that would otherwise fall to humans. When faced with a moral, ethical or conflicting decision, can you truly trust a bot?
For the financial services industry, AI and machine learning offer evident benefits in helping to evaluate risk, detect fraud and execute trades in the markets. In November 2017, the Financial Stability Board (FSB) published a report on the potential benefits, risks and financial-stability implications of AI and machine learning. Among its warnings, the FSB cautioned that the herd behavior caused by clone-like "robot traders" could amplify financial shocks and reduce stability.
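The intuition behind that warning can be shown with a toy simulation (an illustration, not taken from the FSB report; every number is an arbitrary assumption): a hundred traders who share one stop-loss rule all sell at the same moment, while traders with varied rules exit gradually.

```python
import random

def simulate(num_traders=100, clone=True, steps=50, seed=7):
    """Toy market: each trader dumps its position the first time the price
    falls below its stop-loss threshold; every exit pushes the price lower."""
    price_rng = random.Random(seed)         # same shock path for both runs
    threshold_rng = random.Random(seed + 1)
    # Clone "robot traders" share one stop-loss; diverse traders spread out.
    thresholds = ([95.0] * num_traders if clone else
                  [threshold_rng.uniform(80.0, 98.0) for _ in range(num_traders)])
    price, path = 100.0, [100.0]
    for _ in range(steps):
        price += price_rng.uniform(-1.2, 0.8)          # mild exogenous shocks
        exited = [t for t in thresholds if price < t]  # stop-losses triggered
        thresholds = [t for t in thresholds if price >= t]
        price -= 0.05 * len(exited)                    # selling pressure
        path.append(price)
    return path

print(f"price trough, clone strategies:   {min(simulate(clone=True)):.1f}")
print(f"price trough, diverse strategies: {min(simulate(clone=False)):.1f}")
```

With identical thresholds, the entire crowd sells in one step and the mild shock becomes a sharp drop; with diverse thresholds, the same shock path produces a gradual decline.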
Given the immaturity of AI, the question is: in our quest to leverage AI to be better, cheaper and faster, can it really be smarter? And, if AI adoption in financial services continues on its current trajectory, what mechanisms need to be put in place to mitigate risk?
Grandma got run over by a self-driving car
While driverless cars are expected to revolutionize the automobile industry, would you trust that an autonomous car can avoid hitting a brick wall or running over a person who is crossing the street? Not surprisingly, 78 percent of respondents to an AAA survey said they would be afraid to ride in a self-driving car.
Trust in any new technology takes time, but it is particularly critical for AI and machine learning, which are not yet capable of making a split-second, life-or-death decision or exercising the judgment to "make a call." That was evidenced in May 2016, when a Tesla operating on Autopilot collided with a truck, killing the car's driver. The National Transportation Safety Board determined that overreliance on automation and a lack of safeguards led to the fatal crash.
The bottom line is that AI can only do what it’s programmed to do, or what it has learned. The machine simply doesn’t know what it doesn’t know, and without enough data at its disposal, we don’t always know how or why it’s making decisions at all.
AI and the changing face of finance
Because machines can process information faster and more accurately than their human counterparts, machine learning has the potential to impact the finance world more than almost any other industry. Some financial institutions have been investing in AI for years; other firms are now beginning to catch up.
To gain a competitive edge, financial institutions are increasingly using AI in a range of applications to assess credit quality, to price and market insurance contracts and to automate client interaction. Institutions are optimizing scarce capital with AI and machine learning techniques, as well as back-testing models and analyzing the market impact of trading large positions. Meanwhile, hedge funds, broker-dealers and other firms are using it to find signals for higher uncorrelated returns and to optimize trade execution.
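To make the first of those concrete: "assessing credit quality" with machine learning often amounts to training a classifier on past borrowers and scoring new applicants by predicted default risk. Here is a minimal sketch on synthetic stand-in data; the features and figures are invented for illustration, not drawn from any real lender's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in features: income, debt-to-income ratio, years of history.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 3))
# Toy ground truth: high debt-to-income and a short history raise default risk.
default_prob = 1 / (1 + np.exp(-(1.5 * X[:, 1] - 1.0 * X[:, 2] - 0.5)))
y = rng.binomial(1, default_prob)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Score a new applicant: the predicted default risk drives the credit decision.
applicant = [[0.2, 1.8, -0.5]]  # modest income, high DTI, short history
print(f"estimated default risk: {model.predict_proba(applicant)[0, 1]:.0%}")
print(f"holdout accuracy: {model.score(X_test, y_test):.0%}")
```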
A good example is the increasing prevalence of robo-advisors. Instead of talking to a banker about your finances and financing options, a smart robot provides the assistance. Robo-advisors deliver automated, algorithm-driven financial predictions with little to no human supervision. Their assessments take only moments to produce, thanks to machine learning models that improve continually over time, and they can factor in thousands of scenarios when making financial predictions.
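Stripped down, the core of that advice is a mapping from client inputs, such as risk tolerance and time horizon, to target portfolio weights. A minimal sketch of that rule-based core, with purely illustrative weightings; real robo-advisors layer on rebalancing, drawdown modeling and tax optimization:

```python
def robo_allocation(risk_tolerance: int, years_to_goal: int) -> dict:
    """Map a 1-10 risk score and an investment horizon to target weights.
    Illustrative rule of thumb only, not any real platform's formula."""
    # Longer horizons and higher risk scores tilt the mix toward equities.
    equity = min(0.9, 0.1 * risk_tolerance + 0.01 * years_to_goal)
    bonds = (1 - equity) * 0.8
    cash = 1 - equity - bonds
    return {"equities": round(equity, 2),
            "bonds": round(bonds, 2),
            "cash": round(cash, 2)}

print(robo_allocation(risk_tolerance=6, years_to_goal=20))
# {'equities': 0.8, 'bonds': 0.16, 'cash': 0.04}
```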
Or, look directly at Wall Street. IBM's Watson supercomputer has been hired to help run an exchange-traded fund (ETF) and pick stocks that can achieve better performance than the broad U.S. stock market index. IBM's Watson AI will perform a fundamental analysis of U.S.-listed stocks and real estate investment trusts based on up to 10 years of historical data and then apply that analysis to recent economic and news data.
Building Trust: The Human Element
Ironically, AI has been largely driven by the need to reduce human error. However, the human element may be what’s needed to gain trust in AI. Good AI depends on good data.
Look at Nissan. It found a way to calm the nerves of people wary of self-driving cars: it introduced the human element through a command center that receives live-streamed video footage from the car. When the vehicle encounters a situation it cannot resolve on its own, a human operator determines the right action and directs the car where to go.
Similarly, there needs to be human oversight in the financial sector. The FSB report states that the predictable patterns resulting from automated trading strategies could be used by insiders or cybercriminals to manipulate market prices. The report also emphasized the risk of depending on a small number of third-party technology companies that sit outside the remit of financial regulators. And there could be questions over who would bear responsibility for failures in AI.
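In practice, such oversight is often implemented as a human-in-the-loop gate: the model acts autonomously within preset limits, and anything larger or less certain is escalated for human sign-off. A minimal sketch, with purely illustrative thresholds and names:

```python
from dataclasses import dataclass

@dataclass
class TradeProposal:
    symbol: str
    notional_usd: float
    model_confidence: float  # 0.0-1.0, reported by the AI system

# Illustrative policy: small, high-confidence trades execute automatically;
# anything large or uncertain is routed to a human reviewer.
AUTO_NOTIONAL_LIMIT = 100_000
MIN_CONFIDENCE = 0.85

def route(trade: TradeProposal) -> str:
    if (trade.notional_usd <= AUTO_NOTIONAL_LIMIT
            and trade.model_confidence >= MIN_CONFIDENCE):
        return "auto-execute"
    return "escalate-to-human"

print(route(TradeProposal("ACME", 50_000, 0.92)))     # auto-execute
print(route(TradeProposal("ACME", 5_000_000, 0.92)))  # escalate-to-human
```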
Regulations and RegTech to the rescue
Regulators will eventually play a major role in AI. Currently, there are no international regulatory standards, but the FSB has left the door open as to whether new rules are needed. The pace of technological progress adds a further level of complexity when defining a long-lasting set of rules to regulate AI activity.
While we are currently in a political environment of deregulation, global regulations are still changing every 12 minutes, with up to 185 new regulations issued every day, and it's only a matter of time until regulators catch up with AI.
RegTech is emerging as a boon for startups looking to serve the financial sector. The goal of RegTech, beyond digitizing regulation and compliance processes, is to help ensure that another financial crisis does not occur. As regulations catch up to what's happening in AI and aim to provide a framework for compliance demands, reforms and legislation, the RegTech market is only beginning to take shape.
Bottom Line
The emergence of AI and machine learning in the financial industry presents a huge opportunity for entrepreneurs and startups with AI, machine learning and RegTech innovations.
To avoid risks to the financial sector from bad or conflicting data, regulators are going to need to catch up with the technology, and financial services companies will still need humans to ensure a level of trust.