
Designing Artificial General Intelligence (Life Experience Design)

If you’re reading this, I assume this is not the first time you’ve heard of artificial intelligence or, hopefully, had conversations about artificial general intelligence. As for me, whenever the topic of AGI comes up, I can’t help but recall fond memories of my secondary school days, when we debated the question: Is God a scientist or an artist?

These discussions at an early age enabled me to fully conceptualize the dual nature of reality. When thinking about AGI, with all the machine learning and complexity involved in developing generative assistants, I believe it is possible to design artificial general intelligence. In my research, I discovered a systematic and strategic formula for designing life experiences for artificial intelligence.

To discuss life experience design, it is important to break down the elements of artificial intelligence into their most basic classifications and then rebuild the concept in a way that makes sense for designing artificial life. When I think about complex concepts, I like to break them down into ‘thought dust’ particles, because I believe every complex concept is composed of the same fundamental particles. It is the combination of these particles, in an infinite number of possible arrangements, that gives rise to complex concepts. A kind of atomic thinking, if you will. Or a cellular system: cell, tissue, organ, system. To arrive at this formula, it was important to keep in mind a comparative analysis of human intelligence and artificial intelligence, so as to design the formula required for life experience design FOR and USING artificial intelligence.

Drawing from this atomic thinking system, AGI, in its most basic form, is merely an additional artificial intelligence persona, subject to far more intricate rules than your typical persona. So, before we can talk about designing AGI, let us discuss how to design personas and assistants. To do this, I have created a thorough classification framework for designing artificial intelligence.

Overview of the classification framework 

The classification framework I developed for AI assistants comprises two vital dimensions, complexity and technology stack, whose purpose is to systematically classify AI assistants. The objective is to introduce a classification system that is scalable, uniform, and specific, and that can facilitate the assessment, development, and refinement of AI assistants over any period. The classification involves both a class number, denoting functional complexity, and a level letter, representing the complexity of the underlying technology stack. This is very important, as these two dimensions form the building blocks of the life experience formula we will discuss later on.
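
To make the two dimensions concrete, here is a minimal sketch of how the classification could be encoded, assuming a simple Python data structure. The AssistantClassification name and its fields are my own illustrative choices, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssistantClassification:
    """Two-dimensional classification: class number (functional) and level letter (stack)."""
    class_number: int  # functional complexity: 1 (basic) to 5 (autonomous intelligence)
    level_letter: str  # technology stack complexity: "A" (baseline) to "E" (cutting edge)

    def __post_init__(self):
        if self.class_number not in range(1, 6):
            raise ValueError("class number must be between 1 and 5")
        if len(self.level_letter) != 1 or self.level_letter not in "ABCDE":
            raise ValueError("level letter must be A through E")

    def label(self) -> str:
        return f"Class {self.class_number}, Level {self.level_letter}"

# A basic FAQ bot sits at the bottom of both dimensions:
print(AssistantClassification(1, "A").label())  # Class 1, Level A
```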

Definition of Complexity in Artificial Intelligence 

I have already mentioned complexity a few times in passing in this article, but what exactly is complexity? What makes an idea, a persona, an assistant, or a phenomenon complex? More importantly, how can we define complexity in artificial intelligence? 

Complexity refers to the degree of sophistication and advancement exhibited by an AI assistant in performing tasks, understanding context, and interacting with users. It can be dissected into various components, each representing a distinct facet of the assistant’s capabilities. 

To shed more light on how designing complexity could work in artificial intelligence, I will break down the different types of complexity further and give some examples to help you tell them apart.

Types of Complexity

Task Complexity

Task complexity is determined by the breadth of tasks the assistant can handle, the depth of its reasoning, its ability to adapt to context, and how autonomously it can integrate with external systems. There are four types of task complexity; a short sketch follows the list.

  • Single-Step Tasks: Simple, discrete tasks that do not require any contextual understanding. An example is answering an inquiry with an explanation prepared in advance, much like an FAQ bot.
  • Multi-Step Tasks: Tasks that require multiple steps or interactions to complete, such as making a flight reservation, which entails searching for dates, choosing flights, and confirming. Here the assistant can help you complete several tasks, but the steps are predetermined and the use cases are prepared for in advance.
  • Context-Dependent Tasks: These require understanding the contextual factors involved in assisting a person with a task. ‘Book a table’ means reserving a table at a restaurant, not purchasing the table. The assistant at this level understands the context in which the command to execute a task is given.
  • Dynamic Tasks: Decision-making tasks that require real-time adjustment, for instance adapting a response based on feedback from the user. This is the level at which most generative AI assistants operate.
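
To illustrate the difference between single-step and multi-step tasks, here is a minimal sketch of a flight-reservation flow modeled as a predetermined sequence of steps. The BookingFlow class and its step names are hypothetical, chosen only to show the idea of a fixed multi-step task.

```python
# A fixed, predetermined sequence: each step must finish before the next begins.
class BookingFlow:
    STEPS = ["search_dates", "choose_flight", "confirm"]

    def __init__(self):
        self.current = 0   # index of the step we are waiting on
        self.state = {}    # answers collected so far

    def advance(self, user_input: str) -> str:
        self.state[self.STEPS[self.current]] = user_input
        self.current += 1
        if self.current == len(self.STEPS):
            return f"Booking complete: {self.state}"
        return f"Next step: {self.STEPS[self.current]}"

flow = BookingFlow()
print(flow.advance("2025-06-01"))   # Next step: choose_flight
print(flow.advance("Flight 123"))   # Next step: confirm
print(flow.advance("yes"))          # Booking complete: {...}
```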

Contextual Complexity

Contextual complexity refers to the extent to which the assistant can interpret, maintain, and use evolving context across interactions to provide accurate, relevant, and personalized responses. It can be grouped into four levels (a short sketch follows the list):

  • No Context Awareness: These are static responses that do not change regardless of the situation. An example of this is a rule-based chatbot that always gives a specific, predetermined answer to a given question.
  • Basic Context Awareness: Here the assistant has a limited understanding of context, for example, greeting a user by name based on their profile.
  • Intermediate Context Awareness: Understanding and applying context from previous interactions within a single session. For instance, remembering user preferences during a conversation.
  • Advanced Context Awareness: This refers to the ability to understand and apply long-term contextual data across multiple interactions or sessions. Recommending products based on a user’s past behavior over several months is one example of this.
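
As a rough sketch of intermediate context awareness, the snippet below remembers a preference stated earlier in a session and applies it later. The keyword matching is deliberately naive; a real assistant would rely on NLP here.

```python
# Intermediate context awareness: remember preferences within a single session.
class SessionContext:
    def __init__(self):
        self.preferences = {}

    def handle(self, message: str) -> str:
        lowered = message.lower()
        if "i prefer" in lowered:
            # Naive capture of whatever follows "I prefer".
            self.preferences["seating"] = lowered.split("i prefer", 1)[1].strip()
            return "Noted, I'll remember that."
        if "book a table" in lowered:
            seating = self.preferences.get("seating", "any available seating")
            return f"Booking a table with {seating}."
        return "How can I help?"

session = SessionContext()
print(session.handle("I prefer a window seat"))  # Noted, I'll remember that.
print(session.handle("Please book a table"))     # Booking a table with a window seat.
```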

Learning Complexity

This type of complexity concerns the assistant’s ability not only to learn from its interactions with users but also to improve itself from those interactions over time. It is the degree to which an assistant can continuously improve its capabilities and adapt its behavior through user feedback, data analysis, and self-training processes. This complexity also comes in four types (a short sketch follows the list).

  • No Learning: The assistant does not learn; it operates on a set of conditions predefined for it. E.g., a standard FAQ bot.
  • Basic Learning: Minimal adjustments based on previous interactions, mostly within a single session. E.g., altering answers during the course of an interaction.
  • Intermediate Learning: Responses and recommendations adapt over time through many interactions with the user. E.g., a recommendation system that improves as the user continues to use it.
  • Advanced Learning: Learning from large amounts of data without the need for supervision, adjusting to changing patterns and evolving user needs. E.g., a machine learning model that keeps learning as it is exposed to new data.
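
Here is a minimal sketch of basic-to-intermediate learning: a recommender that adapts as the user keeps interacting. The PreferenceLearner class is hypothetical and stands in for a real recommendation system.

```python
from collections import Counter

class PreferenceLearner:
    """Adapt recommendations as the user keeps interacting."""

    def __init__(self):
        self.clicks = Counter()

    def record(self, category: str):
        self.clicks[category] += 1    # learn a little from each interaction

    def recommend(self) -> str:
        if not self.clicks:
            return "popular items"    # cold start: nothing learned yet
        top, _ = self.clicks.most_common(1)[0]
        return f"more items from '{top}'"

learner = PreferenceLearner()
for category in ["sci-fi", "sci-fi", "cooking"]:
    learner.record(category)
print(learner.recommend())  # more items from 'sci-fi'
```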

Interaction Complexity

This refers to the richness and variety of interactions an assistant can perform. It measures how deeply and naturally the assistant can engage in activities such as multi-turn dialogues, adapting its responses to the user’s input style, and maintaining context across interactions. Interaction complexity is categorized into three types (a short sketch follows the list).

  • Simple Interaction: The most basic kind of interaction, over text or voice, where the user has little flexibility. E.g., a text chatbot that responds only to specific commands.
  • Multi-Modal Interaction: Interaction across different modes, with easy switching from one mode to another. E.g., a virtual assistant that executes voice instructions and shows images when required.
  • Rich Interaction: Highly interactive, with the ability to handle complex dialogues, natural language processing, and emotional cues. E.g., an assistant that can not only respond naturally and keep a conversation flowing but also understand emotional overtones.
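
One way to picture multi-modal interaction in code is a response object that carries more than one output mode. The AssistantResponse structure below is an illustrative assumption, not a standard API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssistantResponse:
    text: str
    image_url: Optional[str] = None  # present only when a visual aids the answer
    spoken: bool = False             # whether to render via text-to-speech

def respond(query: str) -> AssistantResponse:
    # Multi-modal: choose the output mode based on what the query needs.
    if "show" in query.lower() or "picture" in query.lower():
        return AssistantResponse(
            text="Here is the chart you asked for.",
            image_url="https://example.com/chart.png",  # placeholder URL
        )
    return AssistantResponse(text="Sure, here is a text answer.", spoken=True)

print(respond("Show me last month's sales"))
```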

Autonomy Complexity

This is the extent to which the assistant functions without human direction or control. It is the measure of how independently an assistant can act on tasks, from passively responding to prompts to proactively orchestrating external systems and making decisions. Autonomy can be divided into three types (a short sketch follows the list):

  • Manual Operation: This type of assistant cannot operate without human input and guidance; it can answer questions only from prepared scenarios.
  • Semi-Autonomous Operation: The assistant can carry out some operations, although human oversight is needed. An example is a customer service bot that can address simple questions but relies on people for complicated queries, escalating those tickets to an actual human.
  • Fully Autonomous Operation: The assistant can handle a variety of activities and make decisions independently. E.g., an AI that autonomously manages scheduling, communication, and decision-making based on learned preferences.
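
A minimal sketch of semi-autonomous operation, assuming the underlying model exposes some confidence score: answer when confident, escalate to a human otherwise. The threshold value is hypothetical.

```python
def handle_ticket(question: str, confidence: float) -> str:
    """Answer when confident enough; otherwise escalate to a human agent."""
    ESCALATION_THRESHOLD = 0.75  # hypothetical cutoff; tune per deployment
    if confidence >= ESCALATION_THRESHOLD:
        return f"Auto-reply sent for: {question!r}"
    return f"Escalated to a human: {question!r}"

print(handle_ticket("How do I reset my password?", confidence=0.92))
print(handle_ticket("My invoice has a charge I don't recognize.", confidence=0.41))
```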

Dimensions of GenAI Complexity

Having established a definition of complexity in the context of artificial intelligence, we can now apply it to generative AI. To this effect, genAI complexity has two distinct dimensions along which it can be designed: functional complexity and technological complexity.

Class Number: Functional Complexity

When designing the functional complexity dimension, I structured it to incorporate a class number for variable identification. This class number represents the overall functional complexity of the assistant, determined by how the different types of complexity combine and interact. You may recall our earlier discussion of the types of complexity; these combinations are the foundation of the classes I developed, such as Class 1, Class 2, and so on. Below, I outline the components of each class, drawn from the levels of the original complexity types, followed by a short sketch that encodes the full table as data.

Class 1: Basic Functionality

  • Task Complexity: Single-Step Tasks
  • Contextual Complexity: No Context Awareness
  • Learning Complexity: No Learning
  • Interaction Complexity: Simple Interaction
  • Autonomy Complexity: Manual Operation

Class 2: Intermediate Functionality

  • Task Complexity: Multi-Step Tasks
  • Contextual Complexity: Basic Context Awareness
  • Learning Complexity: Basic Learning
  • Interaction Complexity: Multi-Modal Interaction
  • Autonomy Complexity: Semi-Autonomous Operation

Class 3: Advanced Functionality

  • Task Complexity: Context-Dependent Tasks
  • Contextual Complexity: Intermediate Context Awareness
  • Learning Complexity: Intermediate Learning
  • Interaction Complexity: Rich Interaction
  • Autonomy Complexity: Semi-Autonomous Operation

Class 4: Expert Functionality

  • Task Complexity: Dynamic Tasks
  • Contextual Complexity: Advanced Context Awareness
  • Learning Complexity: Advanced Learning
  • Interaction Complexity: Rich Interaction
  • Autonomy Complexity: Fully Autonomous Operation

Class 5: Autonomous Intelligence

  • Task Complexity: Dynamic Tasks
  • Contextual Complexity: Advanced Context Awareness
  • Learning Complexity: Advanced Learning
  • Interaction Complexity: Rich Interaction
  • Autonomy Complexity: Fully Autonomous Operation
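
To tie the five classes together, here is one way to encode the tables above as data. The dictionary layout is my own encoding of this article’s tables; note that Classes 4 and 5 share the same profile along these five axes.

```python
# The five functional classes as data, combining the complexity types above.
FUNCTIONAL_CLASSES = {
    1: {"name": "Basic Functionality",
        "task": "Single-Step", "context": "None", "learning": "None",
        "interaction": "Simple", "autonomy": "Manual"},
    2: {"name": "Intermediate Functionality",
        "task": "Multi-Step", "context": "Basic", "learning": "Basic",
        "interaction": "Multi-Modal", "autonomy": "Semi-Autonomous"},
    3: {"name": "Advanced Functionality",
        "task": "Context-Dependent", "context": "Intermediate",
        "learning": "Intermediate", "interaction": "Rich",
        "autonomy": "Semi-Autonomous"},
    4: {"name": "Expert Functionality",
        "task": "Dynamic", "context": "Advanced", "learning": "Advanced",
        "interaction": "Rich", "autonomy": "Fully Autonomous"},
    5: {"name": "Autonomous Intelligence",
        "task": "Dynamic", "context": "Advanced", "learning": "Advanced",
        "interaction": "Rich", "autonomy": "Fully Autonomous"},
}

print(FUNCTIONAL_CLASSES[3]["name"])  # Advanced Functionality
```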

Level Letter: Technology Stack Complexity 

The second dimension of designing genAI complexity is technology stack complexity. For this dimension, I have assigned the first five letters of the alphabet, under the codename Level Letter. The Level Letter indicates the complexity of the technology stack used to build and deploy the assistant. Each level builds on the previous one, adding new capabilities and technologies in a gradual, methodical way. You know how some SaaS websites list pricing tiers like Basic, Pro, and Premium, and the first feature under Pro reads “everything in Basic”? That is how to think of each technology stack level building on the previous one.

Level A: Simple Text-Based Prompting (Baseline)

This refers to text-based prompts or scripts that require no coding and are sometimes structured with simple markup.

Technology Stack and Capabilities:

  • Tools: Basic text editors, markup languages (e.g., Markdown, simple XML). 
  • Executes predefined prompts with basic structure, handling simple tasks. 
  • No scripting or coding: Focuses on text-based inputs with simple markup.

Examples of this include FAQ bots and basic chatbots that give static responses to customer inquiries, questions, and requests.
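
Level A itself requires no coding; purely for illustration, the Python below emulates the behavior of a static FAQ bot so the contrast with later levels is visible. The questions and answers are placeholders.

```python
# Emulating Level A behavior: static, predefined responses, lookup only.
FAQ = {
    "opening hours": "We are open Monday to Friday, 9am to 5pm.",
    "returns": "Items can be returned within 30 days with a receipt.",
}

def faq_bot(question: str) -> str:
    for keyword, answer in FAQ.items():
        if keyword in question.lower():
            return answer
    return "Sorry, I don't have an answer for that. Please contact support."

print(faq_bot("What are your opening hours?"))
```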

Level B: Script-Based Logic (Incremental Complexity)

At Level B, we start to see the introduction of basic scripting capabilities, allowing for conditional logic, simple algorithms, and basic automation.

Technology Stack and Capabilities

  • Languages: Simple scripting languages like Python, JavaScript, or Shell scripts. 
  • Tools: Basic IDEs (e.g., VS Code, Sublime Text), command-line tools, basic APIs.
  • Conditional Logic: Handles if/else conditions and loops.
  • Simple Automation: Automates repetitive tasks based on user inputs.
  • Data Handling: Basic interaction with data, such as reading from and writing to text files or simple databases. 

Examples of this include assistants that automate simple workflows, like sending predefined emails in email automation systems or generating reports. Hundreds of tools like this were in use even before the release of genAI as we know it today.
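
A minimal Level B-style sketch, assuming a simple reporting workflow: conditional logic plus basic file handling, the kind of scripting this level describes. The order data and the flagging threshold are made up.

```python
import csv
from datetime import date

def generate_report(rows: list[dict], path: str) -> None:
    """Write a daily CSV report, flagging any order above a threshold."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["order_id", "total", "flag"])
        writer.writeheader()
        for row in rows:
            # Conditional logic: mark large orders for human review.
            flag = "REVIEW" if row["total"] > 1000 else ""
            writer.writerow({**row, "flag": flag})

orders = [{"order_id": 1, "total": 250}, {"order_id": 2, "total": 1800}]
generate_report(orders, f"report-{date.today()}.csv")
```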

Level C: Machine Learning and API Integration (Intermediate Complexity) 

Assistants at this level incorporate machine learning and API integration, enabling them to learn from data, make informed decisions, and interact with external services. Many AI products today operate at this level, using services like OCR and voice recognition and calling OpenAI APIs to analyse the content for a richer service experience.

Technology Stack and Capabilities

  • Languages: Python, JavaScript, with machine learning libraries like scikit-learn, TensorFlow, or PyTorch.
  • Tools: Jupyter Notebooks, cloud-based ML platforms (e.g., Google Colab, AWS SageMaker), RESTful APIs.
  • Machine Learning Models: Uses pre-trained or custom models for tasks like classification or regression.
  • API Integration: Interacts with external APIs for data retrieval or service integration.
  • Basic NLP: Performs simple NLP tasks like sentiment analysis or language detection.
  • Scalability: Deploys on cloud platforms to handle larger datasets or more users.

Examples of these are assistants that provide personalized recommendations, perform predictive analytics, or interact with cloud services. 
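
A minimal Level C-style sketch combining a small scikit-learn text classifier with an external API call. The training examples and the API endpoint are placeholders, not a real service.

```python
import requests
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy dataset; a real assistant would train on far more data.
texts = ["great service", "terrible delay", "loved it", "very disappointing"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["the support team was great"]))  # ['positive']

# API integration: enrich the interaction with an external service.
response = requests.get("https://api.example.com/v1/orders/42")  # placeholder endpoint
if response.ok:
    print(response.json())
```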

Level D: Advanced Machine Learning and Distributed Systems (Advanced Complexity) 

This level of complexity involves advanced machine learning techniques, scalable cloud architectures, and distributed computing. 

Technology Stack and Capabilities

  • Languages: Python, R, JavaScript, with advanced libraries like Hugging Face, PyTorch for deep learning, and TensorFlow Extended (TFX) for scalable ML. 
  • Tools: Kubernetes for container orchestration, Apache Kafka for real – time data streaming, cloud platforms like AWS, Google Cloud, or Azure. 
  • Advanced Deep Learning Models: Uses complex models like transformers for NLP and CNNs for image processing. 
  • Distributed Systems: Processes large datasets across multiple nodes using frameworks like Apache Spark.
  • Real-Time Data Processing: Capable of real-time analysis and decision-making.
  • Scalable Cloud Infrastructure: Deploys using containerization and orchestration for scalability and fault tolerance.

Examples are assistants that perform real-time analysis, manage complex interactions, or support high-availability services.
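
At Level D, the model side can look as simple as the Hugging Face pipeline call below; the advanced complexity lives in the distributed infrastructure (Kubernetes, Kafka, Spark) wrapped around it. A minimal sketch:

```python
from transformers import pipeline

# Downloads a pre-trained transformer model on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("The rollout went smoothly and users are happy."))
# [{'label': 'POSITIVE', 'score': ...}]
```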

Level E: Autonomous Systems and Emerging Technologies (Cutting-Edge Complexity)

The final level involves cutting-edge technologies like AI/ML integrated with robotics, IoT, and possibly quantum computing. Few large-scale hardware systems are yet capable of this level of complexity.

Technology Stack and Capabilities

  • Languages and Frameworks: Python, C++, Java, robotics frameworks (e.g., ROS), quantum computing frameworks (e.g., Qiskit).
  • Tools: Advanced AI frameworks (e.g., OpenAI’s GPT for large-scale language models), IoT platforms, robotics development platforms (e.g., ROS, Nvidia Jetson), quantum computing platforms.
  • AI/ML Integration with Robotics: Controls and interacts with physical devices, enabling tasks like autonomous navigation.
  • IoT Integration: Interacts with and controls a network of connected devices, supporting tasks like smart home automation. 
  • Quantum Computing: Potential for solving complex problems that classical computers can’t handle. 
  • Autonomous Decision-Making: Operates autonomously in complex, unstructured environments.

Examples of these are autonomous drones, self-driving vehicles, and advanced smart city management systems.
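
As a taste of the quantum end of Level E, here is a minimal Qiskit sketch that builds a two-qubit entangled circuit. It only constructs and prints the circuit; executing it would require a simulator or hardware backend.

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)        # put qubit 0 into superposition
qc.cx(0, 1)    # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])

print(qc.draw())
```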

Thus far, I have explored breaking concepts down into thought dust within the atomic model, conceptualizing complexity in relation to generative AI, and examining the types and dimensional approaches used in designing life experiences for artificial intelligence. In the next article, I will delve into sample complexity combinations, progressively increasing in sophistication, until we arrive at the formula for life experience design. See you in the next stage!

Author

Gideon Awolesi – Senior UX and Usability Engineer

An experienced UX Designer and Usability Engineer with 8 years of expertise creating user experiences for emerging technologies in healthcare, GenAI, and Web3. Passionate about developing scalable design strategies and advancing human–computer interaction and conversation design.