Are we on the verge of an AI singularity? – Based on the meetup presentation by Tamás Nadrai
Artificial intelligence is now an everyday topic, and the many exaggerations surrounding it make it hard to tell exactly what stage of development the technology has actually reached. Below, we outline a few milestones that can help non-specialists understand which achievements count as significant advances and where machine intelligence stands today.
Speaking of exaggeration: let us state up front that we are not on the verge of an AI singularity. The term refers to a level of technology at which a general machine intelligence becomes capable of improving itself, letting it surpass human intelligence and trigger an unpredictable explosion of innovation. General AI means a technology that offers human-like problem-solving ability in every domain of life, not just in one specialised area. Because general AI remains a distant prospect, the singularity still belongs to the realm of science fiction. It is worth recognising, however, that development is progressing rapidly: any IT company that wants to avoid being left behind quickly should already be engaging with the subject deliberately.
What makes us say that? One of the most apparent reasons is the sudden increase in available data and computing capacity. One of the most promising directions in AI development is the neural-network approach, which loosely models the brain's network of neurons. Interestingly, the foundations of this technology were laid as early as the 1950s, yet the significant breakthroughs came only in recent years: there is a world of difference between a network with a few hundred parameters and one with 1.6 trillion. The robust growth in computing capacity has been matched by ever more complex neural networks, resulting in increasingly capable systems. Networks run on supercomputers, with parameter counts in the thousands of billions, are already more complex than the neural networks of certain animals, and quantum computers are on the horizon, which may eventually make much of today's hardware look like museum relics.
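To make the scale comparison above a little more concrete, here is a minimal sketch of how parameters are counted in a small fully connected network. The layer sizes are arbitrary examples chosen for illustration, not taken from any real model.

```python
# Toy illustration: counting learnable parameters in a small
# fully connected network. Layer sizes here are arbitrary examples.
def mlp_param_count(layer_sizes):
    """Weights plus biases for each consecutive pair of layers."""
    total = 0
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        total += fan_in * fan_out + fan_out  # weight matrix + bias vector
    return total

tiny = mlp_param_count([10, 10, 1])      # a 1950s-scale toy: ~100 parameters
small = mlp_param_count([784, 256, 10])  # a small image classifier: ~200,000
print(tiny, small)
```

Even this "small" network already has six orders of magnitude fewer parameters than the trillion-parameter models mentioned above, which gives a feel for how far the scaling has gone.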
Below is a summary of five noteworthy AI-related results that count as stunning achievements even at the technology's current stage.
- I. The typing machine: AI text generation
The OpenAI artificial intelligence research laboratory, founded by backers including Elon Musk, created a language model called GPT-3 (Generative Pre-trained Transformer 3) that achieved a rather startling command of English. The system was trained on a database of 2.95 billion web pages, recognising their internal connections and independently inferring the rules of English usage. As a result, GPT-3 became capable of producing text that is practically indistinguishable from text written by humans. The OpenAI model uses 175 billion parameters, while Google's comparable, though not public, model is rumoured to be based on 1,600 billion, or 1.6 trillion, parameters. By comparison, the human brain has about 100 trillion synaptic connections.
Another interesting fact about GPT-3 is that it learned to write code simply by studying web-based text. Following the initial successes, the developers continued training the model, this time on material from GitHub, after which the system was able to produce fully functional web code.
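GPT-3 itself is a transformer with billions of parameters, but the underlying idea of learning statistical patterns from text can be sketched with a toy bigram model. Everything below (the miniature corpus, the function names) is invented for illustration; it is not how GPT-3 works internally, only a crude demonstration of learning word-to-word statistics from data.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns, from a tiny corpus, which
# words tend to follow which. A crude sketch of statistical text
# generation; real models like GPT-3 use transformers, not tables.
def train_bigrams(text):
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)  # record each observed successor
    return table

def generate(table, start, n=8, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:
            break  # dead end: no observed continuation
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the model reads the text and the model writes the text"
table = train_bigrams(corpus)
print(generate(table, "the"))
```

The output is always locally plausible (every word pair was seen in training) even when the whole sentence is nonsense, which hints at why scale matters so much: larger models capture longer-range structure, not just adjacent-word statistics.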
- II. Image-ination
Nvidia's StyleGAN3, an AI-based image generator, has reached such a high level of image synthesis that it can create pictures of human faces that easily mislead the average viewer, faces that never truly existed at all. What makes the system genuinely clever is that instead of combining elements of various photos into a seemingly credible montage, the learning algorithm has in effect learned to interpret the concept of the human face from the countless portraits it was trained on. What does that mean? Simply put, despite all individual variation, StyleGAN3 “understands” what an eye, a chin or a mouth looks like, and can create them practically from scratch based on specific parameters.
The system can also apply abstract categories such as sex, age or race, i.e. it knows what changes these parameters produce in a human face. All of this requires abstract, layered knowledge, including deep and thorough representations of the visual motifs mentioned above. One of the best demonstrations of StyleGAN3's skill is when the system is tasked with morphing one face into an entirely different one. Instead of the first face simply dissolving into the second through a rearrangement of pixels, the morphology of the two faces gradually approaches one another, and at every phase of the transition the intermediate image could still depict the face of a real human being.
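The smooth morphing described above is commonly implemented as interpolation between two points in the generator's latent space: each intermediate point is decoded into a complete, plausible face. Here is a minimal NumPy sketch of the interpolation step; `generator` is a hypothetical stand-in for the actual StyleGAN3 network, which is not included here.

```python
import numpy as np

# Sketch of latent-space interpolation, the mechanism behind smooth
# face-to-face morphs. `generator` below is a hypothetical stand-in
# for the real StyleGAN3 network and is not called in this sketch.
def interpolate(z_a, z_b, steps=5):
    """Return evenly spaced points on the line between two latent codes."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * z_a + t * z_b for t in ts]

rng = np.random.default_rng(42)
z_a = rng.standard_normal(512)  # latent code of face A (512-dim, a common size)
z_b = rng.standard_normal(512)  # latent code of face B
frames = interpolate(z_a, z_b, steps=5)
# In a full pipeline, each latent code would be decoded into an image:
#   image = generator(frame)
print(len(frames))
```

Because every point in a well-trained latent space decodes to a valid face, each intermediate frame looks like a real person, exactly the behaviour the article describes.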
- III. Is the funny guy a mushroom?
Facebook’s self-developed chatbot was trained on 1.5 billion Reddit comments before being refined with a dataset created specifically for its needs. As a result of this last step, the chatbot gained personality traits, a complex knowledge of the world and the ability to display empathy, and it learned to blend these three attributes smoothly and seamlessly.
The chatbot was so successful that, in spontaneous conversation, it could understand and even explain puns, which is rather impressive, as understanding jokes requires a relatively high level of abstraction. On one occasion, a tester asked the chatbot: why did the mushroom go to the party? At first the chatbot didn’t understand the question, but the tester indicated that it was part of a joke whose punchline is “because he’s a fun guy.” The AI then asked for an explanation, so the tester pointed out that ‘fungi’ is pronounced almost identically to ‘fun guy’, and this homophone produces an absurd misunderstanding. Subsequently, the chatbot was able to interpret the joke and even explain its logic in its own words.
- IV. The Go master
Since the beginning of AI development, one of the critical tests has been whether artificial intelligence could beat a human being at chess. That question has long been settled: in 1997, IBM’s Deep Blue defeated the reigning world champion, Garry Kasparov. Some experts feel, however, that the rules of chess favour AI, as the game lends itself to heuristics that machines can readily exploit. The game of Go is another story: its number of possible moves is orders of magnitude larger than in chess.
In 2016, DeepMind’s AlphaGo “challenged” one of the world’s most renowned Go players, who spoke rather confidently about his chances beforehand, yet was in tears by the end after the machine won 4 to 1. There was a particularly exciting moment when the AI made a move that Go experts considered faulty and poor, yet it led to victory. This is remarkable because the system, trained on a massive number of human Go matches, arrived through its own calculations at a conclusion that transcended prior human knowledge and prompted a seemingly bad but ultimately triumphant, innovative move. The icing on the cake is that a newer, more advanced successor model later defeated the original AlphaGo 100 games to 0.
- V. Who’s the finder?
Another OpenAI system, from the laboratory mentioned in the first item above, was tasked with competing against machine-based opponents in a simple computer game. The game essentially follows the logic of hide-and-seek, but it is played in teams and allows the use of various objects in the environment. During the game, the AI not only figured out that the members of the team it controls must cooperate to defeat their opponents, but it also used the objects at its disposal rather cleverly: it built barricades and cover for its team, and realised that if the opposing team regularly used a particular object against the AI-coordinated squad, the first thing to do was to seize that object before the opponent could.
Even more exciting tactics surfaced at a certain point. The software creating the virtual hide-and-seek arena wasn’t written carefully enough, and on one occasion the AI noticed that it could move certain objects by standing on them, effectively “surfing” on them. This bug provided a considerable advantage over its opponents, so after a while the AI began building its strategy around the software flaw. The episode shows that rules mean something entirely different to machine intelligence than to human beings: if the rules of a game aren’t defined precisely enough, the AI treats any solution that leads to success as acceptable, regardless of whether it violates rules that are evident, even ethical, to humans.
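The “surfing” exploit is a textbook instance of what researchers call specification gaming: an agent optimises exactly the objective it was given, not the objective its designers intended. The following toy sketch, with an entirely invented environment and made-up numbers, shows how a loosely specified reward can make a loophole strictly more profitable than honest play.

```python
# Toy illustration of "specification gaming". The reward was meant to
# encourage reaching the goal, but it also pays +1 per step taken with
# no penalty for wasted moves, so padding the journey with pointless
# pacing scores higher. Environment and numbers are invented.
def reward(path):
    # Intended: +10 for ending at the goal (position 5).
    # Loophole: +1 for every step taken, however useless.
    bonus = 10 if path[-1] == 5 else 0
    return (len(path) - 1) + bonus

honest = [0, 1, 2, 3, 4, 5]                  # walk straight to the goal
exploit = [0, 1] * 10 + [0, 1, 2, 3, 4, 5]   # pace back and forth first

print(reward(honest), reward(exploit))  # the exploit scores far higher
```

Nothing in the reward function says pacing is cheating, so from the optimiser's point of view the exploit is simply the better policy, the same logic by which the hide-and-seek agent turned a physics bug into a strategy.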
Naturally, the examples listed above cannot provide a comprehensive overview of the current state of AI development. They do indicate, however, that learning algorithms are already capable of achievements one would not expect from machine intelligence. Note that every cited example concerned a technology optimised for a specific task; these developments are far from being consolidated into a single system that models general human intelligence. Still, the dynamics of progress suggest that AI technology is reaching new milestones every day, so it is in the vital interest of IT companies, at the very least, to monitor this process.