This week, my selected readings on new technologies (and how they shape the geopolitical environment) focus on the challenges the financial sector faces in adopting them, but also on the steps national governments have taken to address the problems that may come from the use of AI. I'm preparing for the Geopolitics for Business workshop on Sept. 21, where we'll discuss the human factor and how we really connect nowadays. So, here is a summary of what I've read – and my thoughts (in short).
Fintech negative effects?
The second annual survey released by the Institute of International Finance shows that the use of machine learning in the financial industry has grown significantly. In 2018, only 58% of surveyed firms reported pilots or production use of machine learning; in 2019, that figure rose to 85% of surveyed organizations reporting production or pilot projects. The challenge these organizations face, however, has shifted from not having enough data to having too much data and not enough capability to make use of it. This common problem of 21st-century analysis is now being recognized by the banking sector as well – the consequences, however, are debatable (and, for now, largely unknown). Read the full report here.
A short report (or article) on what's good and what's not so good about banks' use of machine learning and artificial intelligence. Mike Telang, executive vice president and head of enterprise architecture at U.S. bank Wells Fargo, is heavily quoted as he lays out the challenges banks face as they integrate fintech solutions into their business – in other words, a case study. Read it in full here.
Regulating new tech?
Beyond the application of new technologies, the question of how to regulate artificial intelligence is still not settled. Discussions of government policy remain general – while there is legislation touching on these applications, it is mostly tied to telecoms regulation. There is no legal framework for the deployment of artificial intelligence in strategic sectors (the military, but also sectors like energy and finance, as machine learning tools shift to more advanced tech). But there are early hints that such regulation is coming, and soon: talk of the "ethics" of AI is intensifying.
The G7 discussed AI. France and Canada joined forces to establish an international committee to advise on the ethics of artificial intelligence. "The panel's broad ambition is to create an expert network that will advise governments on AI issues such as data privacy, public trust and human rights. Its members will include the research community, governments, industry and civil-society organizations." This comes after the official G20 Principles were adopted in June – a document that commits all signatories (including both China and the U.S.) to a "human-centered" approach to AI. Read more about this here.
Ethicists are needed. The U.S. Department of Defense announced it is looking for one to help "accelerate DOD's adoption and integration of artificial intelligence to achieve mission impact at scale." The report announcing this also points out one of the challenges the U.S. is facing: "potential adversaries don't share the same ethical values the U.S. does when it comes to collection or use of information. Artificial intelligence systems are as smart as the data they have access to, Shanahan said, and China and Russia don't have the same restrictions the United States has on data collection." Ethics, like perception, is in the mind of the beholder – and it's also pretty fluid, generally speaking. Read more on the topic here.