In recent years, advances in the field of artificial intelligence have led to better text and speech recognition of natural language, and digital assistants, also often called chatbots, are becoming increasingly beneficial to businesses.
This kind of technology is nothing new; it has been growing in popularity since the 1990s. There is, however, a key difference. While such systems have always been intended to support people in performing cognitive tasks and making decisions, in the past they could only carry out simple tasks using weak AI, whereas now they have evolved into adaptive digital assistants.
Communication with these software-based digital assistants takes place either in text form or via spoken language. We all know Siri, Alexa and Cortana from our personal lives, and thanks to their improved performance these systems are now becoming increasingly widespread in businesses.
Plenty of reason, then, to take a closer look at the areas in which AI assistants could be applied. To determine specific uses in an enterprise context, it's worth looking at the underlying technology.
Digital assistants have six main characteristics:
It goes without saying that a machine needs a degree of intelligence to master all of this. Some of you will be wondering how these digital assistants, also known as conversational user interfaces (CUIs), actually work.
The system architecture is characterised by a front end that represents the user interface and, in the case of a chatbot, consists of text-based dialogue components. From here, user input is forwarded to the Conversational Intelligence component, which handles analysis and answer generation.
This is also where basic language processing takes place, based on natural language processing (NLP) technologies. NLP converts speech into a form a computer can understand so that it can perform tasks.
As the central system component, Conversational Intelligence processes information and synthesises language, either through simple keyword analysis, rule-based logic or AI. If the system understands both the context and the user input, it can generate possible answers by accessing the back end, which is made up of interfaces to external applications and other data sources, and assigning answers through pattern comparison.
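To make the keyword-based variant concrete, here is a minimal Python sketch of such a matching step. All keyword patterns and answers below are invented for illustration; a production system would draw them from its back-end data sources:

```python
import re

# Minimal rule-based "Conversational Intelligence": compare user input
# against keyword patterns and return the answer with the most matches.
# All patterns and answers here are invented for illustration.
ANSWER_PATTERNS = {
    ("opening", "hours", "open"): "We are open Monday to Friday, 9am to 5pm.",
    ("price", "cost", "much"): "You can find our current prices on the website.",
    ("human", "agent", "person"): "I will connect you with a colleague.",
}

FALLBACK = "Sorry, I did not understand that. Could you rephrase?"

def answer(user_input: str) -> str:
    # Normalise the input to lowercase words, ignoring punctuation.
    words = set(re.findall(r"[a-z]+", user_input.lower()))
    best_answer, best_hits = FALLBACK, 0
    for keywords, reply in ANSWER_PATTERNS.items():
        hits = len(words.intersection(keywords))
        if hits > best_hits:
            best_answer, best_hits = reply, hits
    return best_answer

print(answer("When are you open?"))
print(answer("How much does it cost?"))
```

The limits of this approach are obvious: it only recognises exact keyword overlaps, which is exactly why the AI-based methods described below exist.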
So, let’s take a look at how it all works. Modern digital assistants use Natural Language Understanding (NLU) to categorise the user’s intention, Dialogue Management (DM) to determine the appropriate response to that intention, and Natural Language Generation (NLG) to produce a natural-language answer, all using AI.
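The division of labour between NLU, DM and NLG can be sketched in a few lines of Python. Each stage is a plain function here, and the intents and reply templates are invented for illustration; in a real assistant each stage would be backed by a trained model:

```python
import re

def nlu(text: str) -> str:
    """NLU: map raw user text to an intent label."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    if words & {"hi", "hello", "hey"}:
        return "greet"
    if words & {"bye", "goodbye"}:
        return "goodbye"
    return "unknown"

def dialogue_manager(intent: str, state: dict) -> str:
    """DM: decide the next action based on the intent and dialogue state."""
    if intent == "greet" and not state.get("greeted"):
        state["greeted"] = True
        return "respond_greeting"
    if intent == "goodbye":
        return "respond_farewell"
    return "ask_clarification"

def nlg(action: str) -> str:
    """NLG: turn the chosen action into a natural-language reply."""
    templates = {
        "respond_greeting": "Hello! How can I help you today?",
        "respond_farewell": "Goodbye, have a nice day!",
        "ask_clarification": "Could you tell me a bit more about what you need?",
    }
    return templates[action]

def reply(text: str, state: dict) -> str:
    return nlg(dialogue_manager(nlu(text), state))

state = {}
print(reply("Hello there", state))
```

Keeping the three stages separate, as sketched here, is what lets each one be swapped for a more capable AI-based component without rewriting the rest of the pipeline.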
To overcome the limitations of rule-based technologies and pre-programmed systems, language processing is increasingly being automated using AI and machine learning (ML). By leveraging AI, assistants can analyse open-ended statements and use available data to generate suitable answers, with the ultimate goal of a naturally flowing conversation. These systems also translate natural language into structured data, but the difference is that the system automatically creates links between a range of pre-programmed data and dynamically generates content. Machine learning forms the foundation of AI-based chatbots.
This is because it enables IT systems to recognise patterns and regularities in existing data sets using algorithms, and to develop solutions from them. The artificial knowledge and intelligence gained from this experience is what defines ML. Initially, the system is trained using a test data set, and it continues to learn during the operational phase in order to make better use of the available data. This means that such systems learn from every conversation and can therefore develop independently. With sufficient training, analysis and user feedback, extremely complex response patterns can be generated that enable free-flowing, natural communication. AI chatbots are therefore the most complex form of digital assistants, but even this technology has its limits. To find out what they are, we have to take a closer look at AI in digital assistants and data processing.
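A much-simplified flavour of this learning step is shown below: instead of hand-written rules, a bag-of-words classifier builds its knowledge from labelled example sentences, and adding more examples changes its behaviour. The training data is invented for illustration, and real systems use far more sophisticated models:

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def train(examples):
    """Build one word-frequency profile per intent from (text, intent) pairs."""
    profiles = {}
    for text, intent in examples:
        profiles.setdefault(intent, Counter()).update(tokenize(text))
    return profiles

def classify(profiles, text):
    """Score each intent by how often it has seen the input's words."""
    words = tokenize(text)
    scores = {
        intent: sum(counts[w] for w in words) / sum(counts.values())
        for intent, counts in profiles.items()
    }
    return max(scores, key=scores.get)

# Invented training data: the "experience" the system learns from.
examples = [
    ("when do you open", "hours"),
    ("what are your opening hours", "hours"),
    ("how much does shipping cost", "price"),
    ("what is the price of delivery", "price"),
]
profiles = train(examples)
print(classify(profiles, "tell me your opening times"))
```

Note that no rule mentions "opening times" explicitly; the classifier generalises from word overlap with its training examples, which is the essential difference from the keyword matching shown earlier.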
Researchers have been trying to program computers to solve problems independently since the 1950s. With new technologies and the continued development of AI, especially in recent years, research has come much closer to this goal; among other things, the widespread use of AI has become a global trend.
But what is AI? Over the years, there have been countless attempts at a definition, which differ depending on the field.
Because there are various layers of intelligence, there is also no uniform definition of the term. Intelligence is often described as a general mental ability that includes the capacity to recognise rules and reasons, to think abstractly, to learn from experience, to develop complex ideas, and to plan and solve problems. Artificial intelligence can therefore be described as the attempt to emulate human-like intelligence. Many AI experts consider it probable that intelligence superior to anything humans can create will be artificially generated in the coming decades.
However, looking at the situation today, we can assume that it will be a very long time before algorithms are powerful enough for such scenarios. In a perfect world, digital assistants would not only answer questions and carry out simple processes, but would also learn and improve with every conversation, not only reacting appropriately in different contexts, but also independently expanding their scope of services and automating more complex processes.
To make this possible today, digital assistants generally use machine learning (ML) in combination with natural language processing (NLP) as described above.
While today’s digital assistants use various AI methods to process the information transmitted to them as successfully as possible, the knowledge database represents the “brain” of the system architecture. The system refers to this information and data during the dialogue management phase, i.e. the phase in which user input is analysed. Generally speaking, the knowledge base contains keywords or phrases that also appear in the stored possible answers.
However, before chatbots can access this knowledge, it must first be extracted from comprehensive sources and stored in a knowledge database. These sources can contain structured, semi-structured and unstructured information, although most systems can only retrieve data from structured documents. Combining the NLP technology described above with machine learning has already improved the ability to find patterns in large volumes of data, and research has also been carried out into how systems could gain knowledge from unstructured data.
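For structured sources, such a knowledge-base lookup can be as simple as the following sketch: each stored entry is indexed by keywords, and the entry sharing the most keywords with the question is returned. The entries are invented for illustration:

```python
import re

# Minimal structured knowledge base: each entry carries the keywords
# extracted from its source document plus a ready-made answer.
# The entries are invented for illustration.
KNOWLEDGE_BASE = [
    {"keywords": {"return", "refund", "policy"},
     "answer": "Items can be returned within 30 days."},
    {"keywords": {"shipping", "delivery", "time"},
     "answer": "Standard delivery takes 3-5 business days."},
]

def lookup(question: str):
    """Return the answer whose keywords overlap most with the question."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    best = max(KNOWLEDGE_BASE, key=lambda e: len(e["keywords"] & words))
    # No overlap at all means the knowledge base cannot help.
    return best["answer"] if best["keywords"] & words else None

print(lookup("How long is the delivery time?"))
```

This works only because the entries were already structured by hand; extracting the same keyword-to-answer mapping automatically from unstructured text is precisely the open research problem described above.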
However, there is still the challenge of not only making (unstructured) data accessible to systems, but also enabling them to process and understand the information within it in such a way that cognitive processes similar to those of humans become a real possibility. A successful project to create a “brain” that can process data holistically would result in a truly intelligent system, at least in theory.
You may be wondering how far research has progressed on this project and when we can expect such a breakthrough in technology. We’ll take a more detailed look at that in the second part of this blog.
Until 4 October 2020, the celebrations for German Unity Day 2020 will continue in Potsdam's city centre. The state of Baden-Württemberg is presenting itself there as Europe's leading AI innovation centre, with an interactive AI installation supported by the AI company Colugo and Bechtle. The website kreative-ki.de shows a live stream of the art installation and provides information on important economic and social aspects of AI.