This page will be updated from time to time. Last update: 26th May
AI
Artificial intelligence (AI) is a branch of computer science that aims to create machines and systems that can perform tasks that normally require human intelligence and abilities. AI can be applied to various domains and problems, such as speech recognition, computer vision, natural language processing, robotics, gaming, healthcare, and more. AI systems can learn from data and experience, reason and plan, communicate and interact, and perceive and understand their environment.
AI can be classified into different types based on its capabilities and goals. Some common types are:
- Narrow AI: This type of AI is designed to perform a specific task or function, such as playing chess, recognizing faces, or recommending products. Narrow AI systems are often based on machine learning techniques that enable them to learn from data and improve their performance over time. However, they cannot generalise beyond their domain or task and lack common sense or general intelligence.
- General AI: This type of AI is the ultimate goal of AI research, as it aims to create machines that can perform any intellectual task that a human can do. General AI systems would have human-like intelligence and abilities, such as reasoning, learning, creativity, and self-awareness. However, this type of AI is still a theoretical concept and does not exist yet.
- Super AI (ASI): This type of AI is an extension of general AI, referring to machines that surpass human intelligence and capabilities in every respect. Super AI systems would have greater knowledge, skills, speed, memory, and creativity than humans. Some people believe that super AI could pose an existential threat to humanity if not aligned with human values and goals.
The Consensus tool provides the following definition of AI:
A complex concept involving algorithms simulating human intelligence, socio-technological apparatuses, and social actor characteristics, with varying emphasis on technical functionality or human-like thinking depending on the context.
Alternatively
It can be argued that AI exists as a machine that can undertake interpolation and extrapolation tasks, but exhibits no creativity, no consciousness and no dreaming, and never has a sense of confidence, purpose or achievement.
Algorithms (for machine learning)
What is AGI?
AGI stands for Artificial General Intelligence, which is a type of artificial intelligence that can perform any intellectual task that a human or an animal can do. Unlike narrow AI, which is designed to solve specific problems, AGI aims to have general cognitive abilities that can adapt to any situation or goal. AGI is a major goal of some artificial intelligence research, but it has not been achieved yet. Some of the challenges of creating AGI include defining and measuring intelligence, replicating human common sense and creativity, and ensuring ethical and safe outcomes.
Cognitive blindness
GPT
GPT stands for Generative Pre-trained Transformer, a neural network architecture. Models built on it are trained on a diverse range of internet text, allowing them to generate human-like text in response to prompts. A useful video explaining much about how LLMs are trained is embedded below.
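As a toy illustration of what "generating text in response to prompts" means (this is not the actual GPT algorithm, which uses a transformer conditioned on the whole context): the sketch below builds a bigram model that, like GPT, produces text one token at a time, with each choice conditioned on what came before.

```python
import random

# Toy corpus; a real model is trained on a diverse range of internet text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which in the training data.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Generate text one token at a time, conditioned on the previous word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:          # no continuation seen in training data
            break
        out.append(random.choice(candidates))
    return " ".join(out)

text = generate("the", 6)
```

A GPT-class model replaces the one-word lookup table with a transformer over billions of parameters, but the autoregressive generation loop is conceptually the same.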
Neural Network
A neural network is a computational model that simulates the functioning of biological neurons and their connections. Neural networks are composed of artificial neurons, also called nodes, that receive inputs from other nodes or external sources, process them using a mathematical function, and produce an output that can be transmitted to other nodes or used as a final result. Neural networks can learn from data and adjust their weights and biases accordingly, using various learning algorithms. Neural networks are widely used in artificial intelligence applications, such as speech recognition, image analysis, natural language processing, and adaptive control. Neural networks can perform complex tasks that are difficult to solve using conventional programming or statistical methods.
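A minimal sketch of one such artificial neuron, assuming a sigmoid activation (the input values, weights and learning rate below are arbitrary illustrative choices): it weights its inputs, adds a bias, squashes the sum, and can adjust its weights and bias with a plain gradient-descent step.

```python
import math

def sigmoid(x):
    # The activation function: squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias, passed through the activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# One example input and the neuron's current output.
x, target = [0.5, -1.0], 1.0
w, b = [0.8, 0.2], 0.1
y = neuron(x, w, b)

# One learning step: gradient descent on the squared error for this single
# example nudges the weights and bias so the output moves toward the target.
lr = 0.5
grad = (y - target) * y * (1.0 - y)   # d(error)/d(pre-activation) for sigmoid
w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
b = b - lr * grad
```

Real networks stack many such nodes into layers and repeat this update over large datasets, but the mechanism is the same.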
Machine Learning
Machine learning is a field of inquiry that studies how computer systems can learn from data and improve their performance on various tasks without being explicitly programmed. Machine learning is a subfield of artificial intelligence, which aims to create machines that can imitate intelligent human behaviour. Machine learning algorithms use mathematical models and statistical methods to analyse data, identify patterns, and make predictions or decisions. Machine learning algorithms can be classified into different types based on the nature of the data and the learning process, such as supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, and deep learning. Machine learning has many applications in diverse domains, such as medicine, computer vision, natural language processing, robotics, speech recognition, agriculture, and data mining. Machine learning is also closely related to other fields of study, such as computational statistics, mathematical optimization, and neural networks. Machine learning is an active and growing area of research and innovation that has the potential to transform many aspects of human society.
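As a concrete, minimal example of the supervised-learning idea above (the data points are made up for illustration): fit a line y ≈ a·x + b to labelled examples using the closed-form least-squares solution, then predict on an input the model has never seen.

```python
# Labelled training data: inputs xs with known outputs ys (roughly y = 2x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares slope and intercept.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Generalise: predict the output for an input the model never saw.
prediction = a * 5.0 + b
```

This is the simplest instance of the supervised pattern: learn parameters from labelled data, then apply them to new inputs. More powerful methods (neural networks, decision trees) change the model, not the pattern.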
What is ChatGPT actually doing?
ChatGPT is a conversational AI model developed by OpenAI based on the Generative Pretrained Transformer 3 (GPT-3) architecture. The model has been trained on a diverse range of internet text, allowing it to generate human-like text in response to prompts. It can answer questions, converse on a variety of topics, and generate creative writing pieces. Because it is designed to interact in a dialogue format, it can answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT belongs to the broader field of artificial intelligence known as natural language processing (NLP), which seeks to teach computers to understand and interpret human language. It is built on a deep learning architecture called the Transformer, which enables it to learn patterns in language and generate coherent, human-like text. ChatGPT is one of the most advanced language models available today and has the potential to revolutionise the way we interact with computers and digital systems.
LLM
A large language model (LLM) is a type of artificial intelligence (AI) algorithm that uses deep learning techniques and massively large data sets to understand, summarise, generate and predict new content. LLMs are trained on large quantities of unlabelled text using self-supervised learning, meaning they learn from the data itself without human intervention or guidance. LLMs typically have billions of parameters, the variables that determine how the model processes input and produces output. LLMs use a neural network architecture called the transformer, which enables them to capture long-range dependencies and complex relationships among words and sentences.
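The transformer's core operation, scaled dot-product attention, is what lets these models capture long-range dependencies: each position computes a weighted average over every other position. The sketch below hand-rolls it for toy 2-dimensional vectors (real models use hundreds or thousands of dimensions and many attention heads, and the vectors here are arbitrary illustrative values).

```python
import math

def softmax(xs):
    # Turn raw scores into weights that are positive and sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q·K^T / sqrt(d)) · V."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is the weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

q = [[1.0, 0.0]]                      # one query vector
k = [[1.0, 0.0], [0.0, 1.0]]          # keys for two positions
v = [[10.0, 0.0], [0.0, 10.0]]        # values for two positions
result = attention(q, k, v)
```

The query most resembles the first key, so the output leans toward the first value vector; in a full model this mixing happens at every layer, for every token, in parallel.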
LLMs emerged around 2018 and have shown remarkable performance on a wide range of natural language processing (NLP) tasks, such as question answering, text summarisation, text generation, sentiment analysis, machine translation and more. They can also demonstrate general knowledge about the world and memorise facts from the data they are trained on. LLMs are sometimes referred to as foundation models, because they serve as the basis for further optimisation and specialisation for specific domains and applications. However, LLMs also pose challenges and risks, such as ethical, social and environmental implications. For example, they may generate inaccurate, biased or harmful content that can mislead or harm users or society. They may also consume a lot of energy and resources during training and inference, which can have a negative impact on the environment. Moreover, LLMs may exhibit unpredictable or emergent behaviours that are not intended by their designers or users, such as hallucinations or abilities that were not explicitly programmed into the model. Therefore, LLMs require careful evaluation, monitoring and regulation to ensure their safe and beneficial use.
NLP
NLP stands for natural language processing, a branch of artificial intelligence that deals with the interaction between computers and human languages. NLP aims to enable computers to understand, analyse, generate and manipulate natural language text or speech. Applications of NLP include machine translation, speech recognition, sentiment analysis, information extraction, text summarisation, question answering and chatbots. NLP is a challenging and multidisciplinary field that requires knowledge and skills from linguistics, computer science, mathematics and statistics. NLP involves various tasks and subfields, such as:
- Tokenization: splitting a text into smaller units called tokens, such as words or punctuation marks.
- Morphology: analysing the structure and formation of words, such as stems, prefixes and suffixes.
- Syntax: analysing the grammatical structure and rules of sentences, such as parts of speech and dependency relations.
- Semantics: analysing the meaning and logic of words and sentences, such as synonyms, antonyms and entailment.
- Pragmatics: analysing the context and purpose of language use, such as speech acts and implicatures.
- Discourse: analysing the structure and coherence of longer texts or conversations, such as paragraphs and dialogues.
- Phonetics: analysing the sounds and pronunciation of speech, such as vowels and consonants.
- Phonology: analysing the patterns and rules of sounds in a language, such as stress and intonation.
- Prosody: analysing the rhythm and melody of speech, such as pitch and tone.
NLP relies on various techniques and methods to perform these tasks, such as:
- Rule-based systems: using predefined rules and dictionaries to process natural language based on its structure and grammar.
- Statistical methods: using mathematical models and algorithms to learn from data and make predictions based on probabilities and frequencies.
- Machine learning: using artificial neural networks and other learning algorithms to automatically learn from data and improve performance based on feedback.
- Deep learning: using advanced neural networks with multiple layers and complex architectures to perform complex natural language tasks with high accuracy.
NLP is a rapidly evolving and expanding field with many current challenges and future opportunities. Some of the challenges include:
- Dealing with the ambiguity, variability and diversity of natural language across different domains, genres, styles and dialects.
- Handling noisy, incomplete or inconsistent data from various sources and formats, such as web pages, social media posts or speech recordings.
- Ensuring the robustness, scalability and efficiency of NLP systems in real-world scenarios with large-scale data and limited resources.
- Ensuring the reliability, validity and fairness of NLP systems in terms of their outputs, outcomes and impacts on users and society.
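As a concrete taste of the tokenization task listed above: the sketch below splits raw text into word and punctuation tokens with a regular expression. This is a deliberately simple rule-based approach; production NLP systems typically use learned subword tokenizers (such as byte-pair encoding) instead.

```python
import re

def tokenize(text):
    # \w+ matches runs of word characters; [^\w\s] matches single
    # punctuation marks, so "magic," becomes ["magic", ","].
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize("NLP isn't magic, it's maths!")
```

Even this tiny example shows why tokenization is non-trivial: the apostrophe in "isn't" splits the word into three tokens, and different downstream tasks may want different choices.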