An AI Chatbot for the Maldivian Travel Industry

The Maldivian travel industry is a vital part of the country’s economy, with tourism accounting for approximately 28% of GDP. In such a competitive industry, it’s important to stay ahead of the game and provide an exceptional customer experience to attract and retain tourists. With the advent of artificial intelligence (AI), the Maldivian travel industry has a new tool in its arsenal for providing unparalleled customer service. AI-powered chatbots are not new; companies like Google, Amazon, Microsoft, and Facebook have had them for years. However, OpenAI’s GPT is different and has exploded in popularity recently. In this article, we’ll explore how GPT can benefit the Maldivian travel industry. We’ve also included a proof-of-concept demonstration chatbot at the bottom of this article.

An Introduction to AI Chatbots

Chatbots have been around for decades, and some of the world’s largest tech companies, including Google, Amazon, Microsoft, and Facebook, have been using AI-powered chatbot solutions for years. These chatbots are typically rule-based, meaning they follow a set of predefined rules and respond to specific prompts or keywords. They use a combination of natural language processing (NLP) and machine learning techniques to understand the user’s input and provide an appropriate response.

However, GPT differs from these rule-based chatbots in several ways. GPT is a generative language model, which allows it to understand and generate responses based on context rather than just following a set of predefined rules. Unlike rule-based chatbots, which are limited to responding to predefined prompts, GPT can generate human-like responses to a wide range of inputs, making it much more flexible and versatile.

While traditional chatbots are designed to respond to specific prompts or keywords, GPT is designed to understand the user’s intent and generate a relevant response based on that intent. This makes it much more capable of handling complex and varied conversations, and provides a much more natural and engaging experience for users.

GPT: The New Kid in Town

The popularity of ChatGPT has exploded recently because it represents a significant breakthrough in natural language processing (NLP) and machine learning. As a generative language model, ChatGPT has shown an ability to understand and generate human-like responses, making it an attractive tool for businesses seeking to automate customer service and support.

ChatGPT has demonstrated impressive capabilities: understanding the nuances of human language, generating coherent and context-aware responses, and carrying on a natural-sounding conversation with humans. This is because ChatGPT is trained on vast amounts of text data, from which it learns how people actually phrase questions, requests, and answers.

The power and potential of GPT, generative AI, and language models are vast. With these technologies, businesses can automate customer service and support, provide personalized recommendations, and generate content at scale. GPT can be trained on specific domains and used to generate content such as product descriptions, email templates, and even news articles.

In the travel industry, for example, GPT could be used to provide personalized recommendations to travelers, help them plan their trip, answer their questions about destinations, and even assist them in booking flights and accommodations. The power and potential of GPT, generative AI, and language models are only just beginning to be fully understood and explored, and we can expect to see even more innovative use cases in the future.
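To make this concrete, here is a minimal sketch of what such a request could look like using OpenAI’s Python library (the v0.27-era API current at the time of writing). The prompt wording and traveler details are purely illustrative assumptions, not part of any actual product:

    # Minimal sketch using OpenAI's legacy Python SDK (v0.27-era).
    # The system prompt and traveler profile below are illustrative assumptions.
    import openai

    openai.api_key = "YOUR_API_KEY"

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a travel assistant for the Maldives. "
                        "Give concise, personalized recommendations."},
            {"role": "user",
             "content": "We are a couple on a 5-night honeymoon with a "
                        "$4,000 budget. Which resorts and activities "
                        "would you suggest?"},
        ],
        temperature=0.7,
    )

    print(response["choices"][0]["message"]["content"])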

Prompt Engineers: Rise of a New Field

Prompt engineering is a new field that has emerged with the rise of generative AI and language models like GPT. Prompt engineers are responsible for designing and crafting the prompts a language model uses to generate responses to user input. The job requires a unique combination of language skills, technical expertise, and creative problem-solving abilities: an in-depth understanding of how language models work, a strong command of the language being worked with, and a basic grasp of machine learning, data science, and programming. As the use of AI and language models continues to expand, the demand for prompt engineers is likely to grow as well.

The work of a prompt engineer involves designing, testing, and refining prompts to ensure that the language model generates high-quality responses that are accurate and relevant to the user’s input. This requires a lot of creativity, as prompt engineers must come up with different ways to phrase questions and prompts to guide the language model’s responses. Prompt engineers must be able to analyze and interpret data to refine and improve the prompts. They must also work closely with developers to integrate the prompts into the chatbot or other AI-powered system.
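To give a flavour of the work, here is an illustrative prompt template of the kind a prompt engineer might iterate on for a resort chatbot. The wording and placeholders are our own examples, not the actual prompts from our project:

    # Illustrative prompt template a prompt engineer might refine over many test runs.
    PROMPT_TEMPLATE = """You are the booking assistant for {resort_name}.
    Only answer questions about rooms, rates, and availability.
    If the guest asks about anything else, politely steer the conversation back.

    Guest message: {user_message}
    Assistant reply:"""

    prompt = PROMPT_TEMPLATE.format(
        resort_name="Euler Resort Maldives",
        user_message="Do you have a water villa free from 3 to 7 June?",
    )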

Proof of Concept

At Hals & Hounds, we recently completed an exciting project: a proof-of-concept chatbot for an imaginary resort, Euler Resort Maldives, built with GPT-3.5. The chatbot was designed to check room availability once users provide their preferred check-in and check-out dates and room type.
Since this is just a proof of concept, the mechanism we used is far from perfect. We developed the chatbot using four different layers, each of which serves a specific purpose while working together to provide a seamless and helpful conversation with the user.

The first layer is the Compliance layer, which serves as a quality-control mechanism. It checks both the input from the user and the output from the bot to ensure they are relevant to the conversation and consistent with the bot’s purpose, and validates them against a predefined set of rules of engagement so the conversation stays within its guidelines.
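As a rough sketch, such a compliance check can itself be implemented as a small, deterministic GPT call that classifies a message against the rules of engagement. The rules and prompt wording below are illustrative assumptions, not the exact ones used in our prototype:

    import openai

    RULES = (
        "The bot only discusses Euler Resort Maldives: rooms, rates, "
        "availability, and resort facilities. Anything else is off-topic."
    )

    def is_compliant(message: str) -> bool:
        """Ask the model whether a message follows the rules of engagement."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0,  # deterministic yes/no classification
            messages=[
                {"role": "system",
                 "content": f"Rules: {RULES}\n"
                            "Answer with exactly one word: YES if the message "
                            "complies with the rules, NO otherwise."},
                {"role": "user", "content": message},
            ],
        )
        answer = response["choices"][0]["message"]["content"]
        return answer.strip().upper().startswith("YES")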

The Conversation layer is the main chatbot that converses with the user. This layer takes in the user’s inputs and generates responses based on the bot’s programming. The responses are intended to continue the conversation and provide relevant information or assistance to the user.
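A minimal sketch of a conversation layer along these lines, keeping a running message history so the bot has context for each reply (the system prompt is an illustrative assumption):

    import openai

    history = [
        {"role": "system",
         "content": "You are the friendly booking assistant for Euler Resort "
                    "Maldives. Help guests check room availability."},
    ]

    def converse(user_input: str) -> str:
        """Append the user's message, get a reply, and keep the running history."""
        history.append({"role": "user", "content": user_input})
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=history,
        )
        reply = response["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        return reply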

The third layer is the Classification layer. This layer is responsible for identifying the user’s intent and the relevant parameters of their request. For example, if the user is trying to book a hotel room, the Classification layer would identify the check-in date, check-out date, and preferred room type as the parameters needed to fulfill the request. Once the Classification layer has identified the parameters, it returns them in a format that the program can use.
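Here is a hedged sketch of how such a classification step might look, instructing the model to return the extracted parameters as JSON so the surrounding program can parse them. The prompt wording and key names are our own illustration:

    import json
    import openai

    def classify(user_request: str) -> dict:
        """Extract the booking intent's parameters as machine-readable JSON."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0,
            messages=[
                {"role": "system",
                 "content": 'Extract the booking request as a JSON object '
                            'with keys "check_in", "check_out" (ISO dates) '
                            'and "room_type". Use null for anything the '
                            'guest has not provided yet. Return only the '
                            'JSON object.'},
                {"role": "user", "content": user_request},
            ],
        )
        return json.loads(response["choices"][0]["message"]["content"])

    # classify("I'd like a water villa from 3 to 7 June") would return
    # something like: {"check_in": "...-06-03", "check_out": "...-06-07",
    #                  "room_type": "water villa"}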

The fourth and final layer is the Conversion layer. This layer takes in raw data from an API and generates a response based on the parameters identified by the Classification layer. For example, if the user is trying to book a hotel room, the Conversion layer would use the identified check-in and check-out dates, as well as the preferred room type, to generate a response that includes available rooms and their prices.
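A sketch of the conversion step, assuming a hypothetical availability API that returns a list of rooms with prices (the data shape and prompt are illustrative):

    import json
    import openai

    def to_reply(availability: list) -> str:
        """Turn raw availability data from the booking API into a friendly reply."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Summarize this room availability data for the "
                            "guest in a warm, concise tone, including prices "
                            "per night."},
                {"role": "user", "content": json.dumps(availability)},
            ],
        )
        return response["choices"][0]["message"]["content"]

    # Hypothetical data shape, for illustration only:
    # to_reply([{"room_type": "Water Villa", "price_per_night": 950,
    #            "available": 3}])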

Although it works well for this purpose, there are a couple of disadvantages to this method: higher token costs and higher latency, since each turn requires multiple API calls. GPT-4 is just around the corner and is expected to be much more powerful, so perhaps a better mechanism can be developed with it. A better mechanism would be to use a fine-tuned model: taking the pre-trained GPT-3.5-Turbo model and further training it on a task-specific dataset would allow it to adapt to the task better. However, as of the time of this experiment, only base GPT-3 models can be fine-tuned.
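For reference, legacy GPT-3 fine-tuning at the time expected JSONL prompt/completion pairs. Below is a hedged sketch of preparing and launching such a job with the legacy Python SDK; the file names and the training example are placeholders, not our actual data:

    import json
    import openai  # assumes openai.api_key is already set

    # Each training example is a prompt/completion pair in JSONL (legacy format).
    examples = [
        {"prompt": "Guest: Is a beach villa free from 3 to 7 June?\nBot:",
         "completion": " Let me check beach villa availability for 3-7 June."},
    ]
    with open("train.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

    # Upload the file and start a fine-tune on a base GPT-3 model (e.g. davinci).
    upload = openai.File.create(file=open("train.jsonl", "rb"),
                                purpose="fine-tune")
    openai.FineTune.create(training_file=upload["id"], model="davinci")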

During testing, we measured two things: operational reliability, the percentage of test runs in which the chatbot functioned correctly without any errors, and fact fidelity, the accuracy of the chatbot’s responses in terms of providing the correct information to the user. Although the testing wasn’t extremely comprehensive, it produced some interesting results. Our chatbot achieved an operational reliability of 93%, meaning it completed the conversation without errors in the large majority of runs, and a fact fidelity score of 9.8/10, providing accurate information to users most of the time. While there is still a lot of room for improvement, these are encouraging numbers for a proof of concept.
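These two metrics boil down to simple ratios. Here is a minimal sketch of how they could be computed from logged test runs; the log format shown is an illustrative assumption, not our actual test harness:

    # Illustrative test log: did each run complete without errors, plus a
    # 0-10 rating of how factually correct the reply was.
    runs = [
        {"completed": True, "fact_score": 10},
        {"completed": True, "fact_score": 9},
        {"completed": False, "fact_score": 0},
    ]

    operational_reliability = sum(r["completed"] for r in runs) / len(runs) * 100
    graded = [r["fact_score"] for r in runs if r["completed"]]
    fact_fidelity = sum(graded) / len(graded)

    print(f"Operational reliability: {operational_reliability:.0f}%")
    print(f"Fact fidelity: {fact_fidelity:.1f}/10")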

This project has shown the potential of GPT and chatbots to enhance user experiences in various industries. We look forward to exploring further possibilities and helping our clients leverage the power of AI in their businesses.

Would you like to try it out?

Fill in the form below and click Proceed to try out the chatbot. Keep in mind that this is just a prototype proof-of-concept implementation of OpenAI’s GPT-3.5, and it is imperfect and prone to mistakes. GPT-4 is just around the corner, however, and is rumored to be far more powerful. Once GPT-4 is released, we’ll be trying that out as well.

Try it Out!