2020’s Biggest Stories in AI
2020 provided a glimpse of just how much AI is beginning to penetrate everyday life. It seems likely that in the next few years we’ll regularly (and unknowingly) see AI-generated text in our social media feeds, advertisements, and news outlets. The implications of AI being used in the real world raise important questions about the ethical use of AI as well.
So as we look forward to 2021, it is worth taking a moment to look back at the biggest stories in AI over the past year.
GPT-3: AI-Generated Text
Perhaps the biggest splash of 2020 was made by OpenAI’s GPT-3 model. GPT-3 (Generative Pre-trained Transformer 3) is an AI capable of understanding and generating text. The abilities of this AI are impressive: early users have coaxed it to answer trivia questions, write fiction and poetry, and generate simple webpages from written instructions. Perhaps most impressively, human readers in controlled tests have struggled to distinguish short news articles written by GPT-3 from those written by people.
Although GPT-3 is not yet approaching the technological singularity, this model and others like it will prove incredibly useful in the coming years. Companies and individuals can request access to the model’s outputs through an API (currently in private beta testing). Microsoft has acquired an exclusive license to GPT-3’s underlying technology, and other groups are working to replicate its results. I expect we’ll soon see a proliferation of new capabilities built on AIs that understand language.
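Under the hood, models like GPT-3 generate text one token at a time, repeatedly sampling a plausible next token given everything produced so far. A toy bigram model (a hypothetical miniature for illustration, not OpenAI’s code; GPT-3 does the same loop with 175 billion parameters over subword tokens) shows the basic idea:

```python
import random
from collections import defaultdict

# A tiny training "corpus" (illustrative only).
corpus = "the model reads text and the model predicts the next word and the text grows".split()

# Count word -> next-word transitions (a bigram "language model").
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        candidates = transitions.get(words[-1])
        if not candidates:
            break  # dead end: no observed continuation
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the", 8))
```

The gulf between this sketch and GPT-3 is one of scale and architecture (transformers over learned subword embeddings rather than raw word counts), but the generation loop is the same.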
AlphaFold: Protein Folding
Outside of natural language processing, 2020 also saw important progress in biotechnology. Early in the year came the rapid and timely development of mRNA vaccines, which clinical trials proved highly effective as the year went on. Then, as the year came to a close, another bombshell: DeepMind’s AlphaFold appears to be a giant step forward, this time in protein folding.
This fall, the latest version of AlphaFold competed against other state-of-the-art methods in CASP (the Critical Assessment of protein Structure Prediction), a biennial protein-folding prediction contest. Algorithms were tasked with predicting protein structures from amino acid sequences and were judged on the fraction of amino-acid positions predicted correctly to within a set of distance margins. In the most challenging Free-Modeling category, AlphaFold predicted the structure of unseen proteins with a median score of 88.1; the next closest predictor in this year’s contest scored 32.4. This is an astonishing leap forward.
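CASP’s headline metric, GDT_TS, captures this idea: for each of several distance cutoffs, compute the fraction of residues whose predicted position falls within that cutoff of the experimentally determined position, then average the fractions. A simplified sketch (assuming idealized, pre-aligned coordinates; the real metric also searches over structural superpositions):

```python
def gdt_ts(predicted, actual, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """GDT_TS-style score for lists of (x, y, z) residue coordinates.

    Returns 0-100: the average, over the distance cutoffs (in angstroms),
    of the fraction of residues predicted within each cutoff.
    """
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    n = len(actual)
    fractions = []
    for cutoff in cutoffs:
        within = sum(1 for p, a in zip(predicted, actual) if dist(p, a) <= cutoff)
        fractions.append(within / n)
    return 100 * sum(fractions) / len(fractions)

# A perfect prediction scores 100; one uniformly off by 3 angstroms
# satisfies only the 4- and 8-angstrom cutoffs, scoring 50.
actual = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(gdt_ts(actual, actual))
print(gdt_ts([(x + 3.0, y, z) for x, y, z in actual], actual))
```

On this scale, AlphaFold’s 88.1 means that across the cutoffs, the vast majority of residues landed essentially where the real protein puts them.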
Going forward, scientists can use models like AlphaFold to accelerate research on disease and genetics. Perhaps at the end of 2021, we’ll be celebrating breakthroughs that work like this enabled.
Democratizing Deep Learning
As highlighted above, deep learning, the primary method underlying many state-of-the-art AIs, is proving useful in domains as disparate as biology and natural language. Efforts to make deep learning more accessible to domain experts and practitioners are accelerating the adoption of AI in many fields.
Anyone with an internet connection can now generate a realistic but completely fake photograph of a human face. Similar technology has already been used to create more realistic — and more difficult to detect — fake social media accounts in disinformation campaigns, including some leading up to the 2020 U.S. election. And OpenAI is planning to make the capabilities of GPT-3 available to vetted users through a comparatively easy-to-use API. There is genuine concern that as deep-learning-enabled technology becomes more accessible, it also becomes easier to weaponize.
But pairing AIs with human domain experts can also be leveraged for good. Domain experts can steer the AIs towards impactful, solvable problems and diagnose when the AIs are biased or have reached incorrect conclusions. The AIs provide the ability to rapidly process enormous volumes of data (sometimes with higher accuracy than humans), making analyses cheaper and faster, and unlocking insights that might otherwise be out of reach. User-friendly tools, APIs, and libraries facilitate the adoption of AI, especially in fields that can leverage already well-established techniques such as image classification.
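As an illustration of how low the barrier has become, a few lines of scikit-learn (a popular open-source machine learning library; the dataset and model here are chosen for illustration) train a respectable classifier on the classic 8x8 handwritten-digit images bundled with the library:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Load 1,797 labeled 8x8 grayscale images of handwritten digits.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A support-vector classifier with a standard kernel width for this dataset.
clf = SVC(gamma=0.001)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

A domain expert needs no knowledge of optimization or kernel methods to get a working classifier; that accessibility, multiplied across fields, is what is driving adoption.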
One of the interesting consequences of AI and ML systems becoming more readily accessible has been the resulting shift of priorities in the field of AI Ethics.
Progress in AI Ethics
What stands out about the field of AI Ethics in 2020 is not any single achievement or breakthrough, but rather the sheer amount of work that was done in re-orienting and focusing attention towards topics of immediate concern. These include questions ranging from how to deal with racial and gender biases in datasets to inequities resulting from low-paid gig work labeling the very data used to train algorithms.
Some of these issues are being confronted now because of our increasing interaction with AI systems. The other driving factor has been a small but dedicated group of researchers, often from groups underrepresented in the broader AI community, who have not only sounded the alarm about these ethical concerns but have also pushed for increased diversity and representation in the field itself.
Despite the progress made so far, a large uphill battle remains. At the beginning of December, Google fired its Ethical AI co-lead, Timnit Gebru. The news has been unsettling for the broader Ethical AI community: Gebru was attempting to publish research on the environmental costs of training large-scale language models, a technology core to Google’s business, and the review process exposed issues regarding the company’s lack of diversity. The incident also raises questions about how the academic research community should relate to industry.
Nevertheless, the accomplishments in this burgeoning field lay the groundwork for determining for whom, and for what, AI should be used.
Looking towards 2021
At the start of 2020, some researchers expressed concerns that AI research may soon be entering another winter, in which progress reaches a standstill and both interest and funding dry up. While the novelty and excitement surrounding deep learning may indeed be wearing off, it is certainly interesting to note that two of the more exciting breakthroughs in 2020 were GPT-3 and AlphaFold, both of which leveraged existing theoretical approaches, but greatly advanced the practical applications of AI algorithms in their respective domains. Moving forward, we suspect focus will shift towards making it possible to learn from smaller amounts of data, while improving generalizability and interpretability, all in service of making AI models more practicable.
Human domain experts will also continue to play an important, if different, role, as democratization efforts continue to push AI capabilities into new fields. As these changes reshape the landscape in which AI is deployed, and the methods by which we interact with such systems, we’re also likely to see continued focus on pragmatic problems with real societal impacts, and continued discussion about the role of AI in society.
In any case, practical applications appear to have substantial room before they exhaust the available theoretical advances. And unlike prior decades, the penetration of AI into society and the promise of attainable pragmatic solutions seems likely to sustain AI progress for the foreseeable future.
About the Authors
Christopher Thissen, Senior Data Scientist at Vectra. At Vectra Chris harnesses the power of machine learning to detect malicious cyber behaviors. Before joining Vectra, Chris led several DARPA-funded machine learning research projects at Boston Fusion Corporation. Prior to Boston Fusion, he was a postdoctoral fellow at the Carnegie Institution for Science in Washington, D.C. Chris holds a B.S. from the University of Notre Dame and a Ph.D. from Yale University.
Ben Wiener, Data Scientist at Vectra. Ben has a PhD in physics and a variety of skills in related topics including computer modeling, optimization, machine learning, and robotics.
Sohrob Kazerounian, AI Research Lead at Vectra. Sohrob is highly experienced in artificial intelligence, deep learning, recurrent neural networks, and machine learning. Before joining Vectra, he did machine learning and artificial intelligence work for SportsManias. Prior to SportsManias, he was a postdoctoral research fellow at IDSIA. He holds a B.S. in cognitive science as well as computer science and engineering from the University of Connecticut and a Ph.D. in cognitive neural systems from Boston University.