AILib


  1. AI Intro
  2. AI and its subsets
  3. Goals for AI
  4. History/Timeline of AI
  5. ML
  6. Understanding ChatGPT and LLMs
  7. Ethics of AI
  8. Fictitious AI
  9. AILib Blog

Brief Overview of AI.

To the best of most people's knowledge, a computer only does what it has been explicitly told or programmed to do. That is not the case with Artificial Intelligence, or AI. AI is the technology that allows computers to perform tasks as a human would, without explicit programming for each task; in effect, it gives a computer a kind of "brain". This is achieved through several technologies working together.
Automation vs. AI

"Automation is the application of technology, programs, robotics or processes to achieve outcomes with minimal human input." (IBM, 2024). This means that automation uses predefined rules and programming to handle tasks efficiently and repeatably. Automation is mostly sought after for repetitive or overwhelming tasks.


Commonly confused and interchangeably used terms in the field of AI.

Algorithm vs. Model

While technically different, these terms are often used interchangeably to describe the mathematical process or structure used for AI tasks. An algorithm is the process or method, and a model is the output or result of that process, typically used in AI tasks like prediction, classification, or decision-making.
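One way to see the distinction in code: below, the least-squares fitting procedure is the algorithm, and the (slope, intercept) pair it outputs is the model. This is a minimal sketch; the data points are made up for illustration.

```python
def fit_line(xs, ys):
    """The algorithm: ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept  # the model: two learned numbers

# Running the algorithm on data produces the model...
slope, intercept = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
# ...and the model, not the algorithm, is what makes predictions.
prediction = slope * 5 + intercept  # 10.0 for this toy data
```

The same algorithm run on different data would produce a different model, which is exactly why the two terms are worth keeping apart.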

Machine Learning vs. Deep Learning

Machine Learning involves algorithms that parse data, learn from that data, and then apply what they have learned to make informed decisions while Deep Learning is a subset of ML that uses layered neural networks to analyse various factors of data. Deep learning is particularly powerful for tasks like image and speech recognition.

Neural Networks vs. Artificial Neural Networks(ANNs)

Neural Networks generally refer to systems loosely modelled after the human brain that are designed to recognize patterns. Artificial Neural Networks are a specific type of neural network used in computing, which are key components in deep learning designs.

Data Mining vs. Machine Learning

Data Mining is the process of discovering patterns and knowledge from large amounts of data. The process is exploratory. Machine Learning is a method of data analysis that automates analytical model building. It uses methods from neural networks, statistics, operations research, and physics to find hidden insights in data without explicitly being programmed for where to look or what to conclude.

Cognitive Computing vs. AI

Cognitive Computing refers to systems that mimic human interaction by interpreting speech and text, and responding in a human-like manner. AI encompasses a wider array of technologies including those used in cognitive computing but also includes systems that can perform tasks without necessarily mimicking human behaviour.

AI vs. Robotics

AI involves creating algorithms and systems that can perform tasks that typically require human intelligence. Robotics is the branch of technology that deals with the design, construction, operation, and application of robots. When robots are equipped with AI, they can perform tasks that involve a degree of autonomous decision-making. While a robot is a physical machine, it often incorporates AI to perform tasks autonomously, blurring the line between the two terms.

RPA vs. AI

Robotic Process Automation (RPA) is a type of software automation that mimics actions that humans might take, like logging into applications, entering data, completing tasks, and logging out. RPA is typically rule-based and does not learn from its actions. AI includes systems that can learn and adapt over time, improving their decision-making capabilities based on experiences or newly acquired data.

BI vs. AI

Business Intelligence typically refers to technologies, applications, and practices for the collection, integration, analysis, and presentation of business information. The focus is on static data analysis to drive business decisions. AI involves more dynamic analysis capabilities, including predictive analytics and machine learning, where the system improves its analysis over time through learning.

Predictive Analytics vs. Machine Learning

Predictive Analytics uses statistical models and forecasting techniques to understand the future. It uses historical data to identify trends and determine how likely specific outcomes are in the future. Machine Learning is a subset of AI that involves teaching a computer to learn from data and make decisions based on its learning, without being explicitly programmed. It is often used within predictive analytics but is broader in application and capability.

Expert Systems vs. AI

Expert Systems are computer systems that emulate the decision-making ability of a human expert. They use predefined rules and knowledge bases to make decisions, which means they don't learn or adapt beyond their initial programming. AI encompasses a wider range of technologies, including those that can learn and adapt autonomously, going beyond simple rule-based systems.

AI's subsets- ML

Machine Learning (ML) is a powerful subset of Artificial Intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and improve their performance on tasks over time, without being explicitly programmed for each task. Here are the main types of Machine Learning:
Supervised Learning

This involves training a model on a labelled dataset, where the input data is paired with the correct output. The model learns to predict the output from new, unseen inputs.

Unsupervised Learning

The model learns from unlabelled data, identifying patterns, groupings, or structures within the data without explicit instructions on what to predict.

Reinforcement Learning

The model (often called an agent) learns by interacting with an environment and receiving feedback in the form of rewards or penalties. The goal is to learn a strategy that maximizes the cumulative reward.
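A common minimal illustration of this reward loop is the multi-armed bandit. The sketch below uses a made-up two-armed slot machine and an epsilon-greedy agent; the payout rates and epsilon value are arbitrary choices for illustration, not from any real system.

```python
import random

random.seed(0)
true_means = [0.2, 0.8]   # hidden average reward of each arm (unknown to agent)
estimates = [0.0, 0.0]    # the agent's learned value estimates
counts = [0, 0]
epsilon = 0.1             # fraction of steps spent exploring

for _ in range(1000):
    if random.random() < epsilon:
        arm = random.randrange(2)                          # explore randomly
    else:
        arm = max(range(2), key=lambda a: estimates[a])    # exploit best guess
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    # Incremental average: estimate += (reward - estimate) / n
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# After many steps, the estimate for the better-paying arm is usually higher,
# so the agent has learned a strategy purely from reward feedback.
```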

Semi-supervised Learning

This combines a small amount of labelled data with a large amount of unlabelled data. This approach is used when labelling data is expensive or time-consuming. For example: text classification, where only a few documents are labelled, but the model learns to classify a large number of documents.

Transfer Learning

Involves taking a pre-trained model developed for one task and applying it to a different but related task. This approach leverages existing knowledge to improve performance on a new task. For example: using a model trained on a large dataset of images to help classify medical images.

Deep Learning

This is a very specialized subset of ML (Machine Learning) that uses multi-layered artificial neural networks (ANNs) to model complex patterns in large datasets. Deep learning models excel at tasks like image and speech recognition. Example: Convolutional Neural Networks (CNNs) for image classification or Recurrent Neural Networks (RNNs) for sequence data like text or time series.

Artificial Neural Networks (ANNs)

These are computational models inspired by the human brain, consisting of layers of interconnected nodes (neurons). ANNs are the foundation of deep learning and can learn to approximate complex functions by adjusting the weights of the connections between neurons. A simple ANN could be used for binary classification, while deeper, more complex ANNs are used in deep learning.
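As a toy illustration of how layered neurons compute functions a single neuron cannot, here is a minimal two-layer network with hand-chosen weights that computes XOR. The weights and thresholds are picked by hand for clarity; in a real ANN they would be learned.

```python
def step(x):
    """A simple threshold activation: fires (1.0) when input exceeds 0."""
    return 1.0 if x > 0 else 0.0

def xor_net(x1, x2):
    # Hidden layer: two neurons reading both inputs.
    h1 = step(x1 + x2 - 0.5)        # fires if at least one input is 1 (OR)
    h2 = step(x1 + x2 - 1.5)        # fires only if both inputs are 1 (AND)
    # Output neuron combines them: OR but not AND, i.e. XOR.
    return step(h1 - 2 * h2 - 0.5)

[xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# -> [0.0, 1.0, 1.0, 0.0]
```

XOR is the classic example of a function no single-layer perceptron can compute, which is exactly why the hidden layer matters.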

Goals of AI

"The various subfields of AI research are centred around particular goals and the use of particular tools." (Wikipedia, "Artificial intelligence")

Here are the main goals of AI:

Reasoning and Problem-Solving: Develop algorithms that enable machines to reason logically, solve puzzles, and make decisions, incorporating methods to handle uncertainty and incomplete information.

Knowledge Representation: Create systems that represent and utilize knowledge about the world, enabling AI to make informed decisions using knowledge bases and ontologies. This goal is pursued by AI models such as ChatGPT by OpenAI, Gemini (formerly Google Bard) by Google, and Microsoft Copilot.

Planning and Decision-Making: Equip AI agents with the ability to plan actions and make decisions that achieve goals or optimize outcomes, even under uncertain conditions.

Learning: Design systems that can learn from data, improving their performance over time through various forms of machine learning, including supervised, unsupervised, and reinforcement learning.

Natural Language Processing (NLP): Enable machines to understand, generate, and interact using human language, tackling tasks like speech recognition, machine translation, and text generation.

Perception: Develop AI that can perceive and interpret sensory inputs, such as vision, sound, and touch, to understand and interact with the physical world.

Social Intelligence: Create AI systems that recognize and simulate human emotions, facilitating socially intelligent interactions that are more engaging and human-like.

General Intelligence: Pursue the long-term goal of developing Artificial General Intelligence (AGI), capable of performing a wide range of tasks with the versatility and breadth of human intelligence. In other words, the ability to complete any task a human can perform, at least as well, is among the field's long-term goals.

History of AI

Theory of computation

1936

Alan Turing's theory suggested that machines could simulate any form of mathematical reasoning.

The Turing Test

1950

Turing published "Computing Machinery and Intelligence," proposing the imitation game now known as the Turing Test.

The term "AI" was born!

1956

The Dartmouth Workshop marked the founding of AI as a field, where John McCarthy coined the term "Artificial Intelligence".

The Perceptron

1958

Frank Rosenblatt invented the perceptron, a pattern-recognition algorithm based on a simple two-layer learning network loosely modelled on the human brain.
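Rosenblatt's learning rule can be sketched in a few lines of Python: nudge the weights toward each example the perceptron gets wrong. The AND-gate data and learning rate here are illustrative choices, not from his original work.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Rosenblatt's rule: on each error, shift weights toward the target."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# A linearly separable toy problem: the AND gate.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

For linearly separable data like this the rule is guaranteed to converge, which is what made the perceptron so exciting; its inability to handle non-separable problems like XOR is what Minsky and Papert later highlighted.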

AI Labs established

1960s

AI programs show early success in solving problems like checkers, algebra, and theorem proving, and AI labs are established in the UK and US.

First AI Winter

1969

Marvin Minsky and Seymour Papert published "Perceptrons," which showed the limitations of perceptron models and contributed to the first AI winter.

Second AI Winter

1985

The AI market grows significantly, followed by the collapse of the Lisp machine market in 1987, leading to a second "AI Winter."

Neural Networks on the rise.

1990s

The rise of sub-symbolic approaches like neural networks, with Yann LeCun demonstrating the success of convolutional neural networks (CNNs) in digit recognition.

AI making progress fast.

2000s

AI starts producing verifiable results in specific applications, leading to widespread adoption across industries.

Deep Learning dominating

2012

Deep Learning starts dominating AI research and benchmarks, leading to massive industry adoption.

AlphaGo's victory

2016

AlphaGo by DeepMind defeats world champion Lee Sedol at Go, showcasing the power of reinforcement learning and deep neural networks.

GPT-3 has been born!

2020

GPT-3 is released by OpenAI, demonstrating the capabilities of large-scale language models in generating human-like text.

The AI revolution!

2022 - Present

Massive investment in AI research and development, with billions of dollars being invested annually and significant AI-related job openings.

Understanding ML

Machine Learning algorithms

Machine Learning (ML) algorithms are fundamental to the field of artificial intelligence, providing the means through which computers can learn from and make predictions or decisions based on data. The choice of algorithm depends on the type of data available, the specific task at hand, and the desired outcome.
How Machine Learning Algorithms Are Used

- Feature Selection and Engineering: Before applying machine learning algorithms, significant effort goes into selecting the appropriate features (input variables) that make the most sense for the model to learn effectively.
- Model Training: Algorithms learn from data by adjusting their parameters to minimize the error between their predictions and the actual outcomes of the data.
- Model Evaluation: After training, models are evaluated using metrics like accuracy, precision, recall, and F1 score for classification tasks, or mean squared error for regression tasks.
- Model Optimization: Techniques such as grid search or random search are used to find the optimal settings for the model's parameters to improve performance.
- Prediction: Once trained and validated, models are used to predict outcomes on new, unseen data.
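The evaluation step above can be sketched for a binary classifier. The true labels and predictions below are hypothetical, chosen only to show how the metrics are computed from them.

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Made-up evaluation data: 5 test examples.
acc, prec, rec, f1 = classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

Precision asks "of the positives I predicted, how many were right?", recall asks "of the real positives, how many did I find?", and F1 balances the two.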

Types of Machine Learning Algorithms

Supervised Machine Learning Models

These algorithms are trained using labelled data, i.e., data which has an input paired with the correct output. The algorithm learns a model on this data to be able to predict the output associated with new inputs.

Unsupervised Machine Learning Models

These algorithms are used when the data has no labels, and the goal is to infer the natural structure present within a set of data points.

Semi-supervised Machine Learning Models

Combines a small amount of labelled data with a large amount of unlabelled data during training. Semi-supervised learning is useful when the cost of labelling data is too high.

Reinforcement Machine Learning Models

In this type of ML, the algorithm learns to make specific decisions by trying to maximize a reward signal. The learner is not told which actions to take but instead must discover which actions yield the most reward by trying them.

Examples of each mentioned type

Supervised Machine Learning Models
  1. Linear Regression: Used for predicting a continuous value. For example, predicting house prices based on features like area, age of the house, location, etc.
  2. Logistic Regression: Used for binary classification tasks, such as spam detection or predicting whether a customer will buy a product or not.
  3. Decision Trees and Random Forests: These are used for classification and regression tasks. They are particularly useful for medical diagnosis, customer segmentation, and financial analysis.
Unsupervised Machine Learning Models
  1. Clustering Algorithms (e.g., K-means, DBSCAN): Used to group a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups. Common applications include customer segmentation, grouping experiment outcomes, and market segmentation.
  2. Principal Component Analysis (PCA): A dimensionality reduction technique used to reduce the dimensionality of large data sets, increasing interpretability while minimizing information loss.
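The K-means idea named above can be sketched in plain Python on made-up one-dimensional points: repeatedly assign each point to its nearest centroid, then move each centroid to the mean of its cluster. This is a simplified sketch (K = 2, one dimension), not a production implementation.

```python
def kmeans_1d(points, centroids, iterations=10):
    """Alternate assignment and update steps; returns final centroids."""
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = {c: [] for c in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [sum(cl) / len(cl) if cl else centroids[c]
                     for c, cl in clusters.items()]
    return centroids

# Two obvious groups in the data; no labels are ever provided.
centroids = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 10.0])
# centroids settle near 1.0 and 9.0
```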
Semi-supervised Machine Learning Models

It is used for tasks like image and video annotation where manual labelling is labour-intensive but unlabelled data is plentiful.

Reinforcement Machine Learning Models

It is used for developing self-learning agents like those used in video games, autonomous vehicles, or robot navigation systems.

What are LLMs?

Understanding ChatGPT and other LLMs

To understand LLMs, we'll walk through the components and features of ChatGPT, a large language model developed by OpenAI on the GPT (Generative Pre-trained Transformer) architecture.

Its Core
ChatGPT is built on a neural network architecture known as a transformer (introduced in the 2017 paper "Attention Is All You Need" by Vaswani et al.), which is particularly effective for processing and generating natural language. Transformers use self-attention mechanisms to process input data: the importance of each word in a sentence is weighed regardless of its position, enabling the model to understand the context and relationships between words.
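A minimal sketch of the scaled dot-product attention behind this mechanism, in plain Python with tiny made-up vectors. The function and data are illustrative, not OpenAI's implementation: each position's output is a softmax-weighted mix of all the value vectors.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Score each key against the query, scaled by sqrt(dimension).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output = weighted average of the value vectors.
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# A query that matches both keys equally attends to both values equally.
out = attention([[0.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```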
The Training Process
Training happens in two phases. In pre-training, GPT models learn from vast amounts of text data using unsupervised learning: they are trained as autoregressive language models, predicting the next word in a sequence given the previous words. This gives the model a broad understanding of language, including grammar, facts, and a variety of styles and semantics. After pre-training, the model can be fine-tuned on smaller, task-specific datasets. This stage tailors the model's general capabilities to particular applications such as answering questions, translating text, or generating content. Fine-tuning typically uses supervised learning: the model adjusts its parameters on labelled, task-specific data to minimize error and optimize performance for that task.
Structural components
- Multi-Head Attention (Layers and heads): Each layer has multiple 'heads' in its attention mechanisms, allowing the model to simultaneously focus on different parts of the input sequence for better context understanding.
- Feed-Forward Neural Networks: Each layer also includes feed-forward neural networks that process the outputs from the attention mechanisms.
- Normalization and Residual Connections: These are used in each layer to help stabilize the learning process and improve the flow of gradients through the network during training.
- Input Embeddings: Words are converted into numerical form to be processed by the model. Each word is represented by a unique vector in a high-dimensional space.
- Positional Encoding: Since transformers do not inherently process the order of the input (unlike RNNs), positional encodings are added to give the model information about the position of words in the input sequence.
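The sinusoidal positional encoding from the original transformer paper can be sketched as follows. This is a simplified plain-Python version for illustration: even dimensions use sine and odd dimensions use cosine, at wavelengths that grow geometrically with the dimension index.

```python
import math

def positional_encoding(position, d_model):
    """Return the d_model-dimensional encoding vector for one position."""
    pe = []
    for i in range(d_model):
        # Paired dimensions (2i, 2i+1) share the same wavelength.
        angle = position / (10000 ** ((i // 2 * 2) / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

positional_encoding(0, 4)  # -> [0.0, 1.0, 0.0, 1.0]
```

Because every position gets a distinct, smoothly varying vector, adding it to the word embeddings lets the otherwise order-blind attention layers tell positions apart.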
Decoding strategies for text generation
When generating text, GPT uses various strategies to decide which word to produce next:
- Greedy Decoding: Always picks the most likely next word.
- Beam Search: Considers multiple possible sequences simultaneously, choosing the sequence that has the overall highest probability.
- Top-K Sampling: Randomly picks the next word from the top K likely candidates, balancing creativity and accuracy.
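These strategies can be sketched with a hypothetical next-word probability table; the words and probabilities below are invented for illustration, not real model outputs.

```python
import random

# Invented probabilities for the word following "the fast ...".
next_word_probs = {"cat": 0.5, "dog": 0.3, "car": 0.15, "tree": 0.05}

def greedy(probs):
    """Greedy decoding: always take the single most likely word."""
    return max(probs, key=probs.get)

def top_k_sample(probs, k, rng):
    """Top-K sampling: draw randomly among the K most likely words."""
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    words = [w for w, _ in top]
    weights = [p for _, p in top]
    return rng.choices(words, weights=weights)[0]

greedy(next_word_probs)                          # always "cat"
top_k_sample(next_word_probs, 2, random.Random(0))  # "cat" or "dog"
```

Greedy decoding is deterministic and can sound repetitive; top-K injects controlled randomness, which is one reason the same prompt can yield different completions.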

Ethics with AI

As Artificial Intelligence (AI) continues to permeate various aspects of society, the ethical implications of AI technologies have become a major concern. Ethical AI development involves ensuring that AI systems are designed and deployed in ways that are fair, transparent, and beneficial to humanity.

  1. Fairness and Non-Discrimination
    AI systems should be designed to treat all individuals fairly, without discrimination based on race, gender, age, or any other protected characteristic. Developers must be aware of and mitigate biases in data and algorithms to prevent perpetuating societal inequalities.
  2. Transparency
    AI models should be transparent and their decision-making processes should be understandable to users. It is crucial, especially in high-stakes areas like healthcare or criminal justice, where decisions significantly impact individuals' lives.
  3. Privacy and Data Protection
    AI systems often rely on large datasets, including personal information. It is essential to uphold individuals' privacy rights by ensuring that data is collected, stored, and processed securely and with consent. Developers should adhere to data protection laws, such as the General Data Protection Regulation (GDPR) in Europe.
  4. Accountability
    Developers, companies, and organizations deploying AI systems must be accountable for their actions and the outcomes of their AI products. There should be mechanisms in place to identify and address harmful consequences or errors in AI systems.
  5. Safety and Security
    AI systems must be safe and secure, ensuring that they do not pose risks to users or society at large. This includes protection against hacking, unauthorized access, and other forms of malicious interference.
  6. Human-Centric Design
    AI should enhance human capabilities rather than replace them. Developers should prioritize user well-being, ensuring that AI systems empower individuals and do not lead to dehumanization or reduced human agency.
  7. Environmental Sustainability
    The environmental impact of AI, including energy consumption and resource use, should be considered in the development process. Developers are encouraged to design AI systems that are energy-efficient and contribute to sustainable practices.

Established AI Ethics Guidelines

  1. The European Commission’s Ethics Guidelines for Trustworthy AI
  2. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
  3. The AI Ethics Guidelines by OECD

AI ethics is not just a set of guidelines but a commitment to responsible innovation. By integrating ethical considerations into the development process, AI developers can create systems that are not only effective but also fair, transparent, and aligned with human values. This approach helps build trust in AI technologies, ensuring that they contribute positively to society.

First AI Summit 2023


AI and Fiction

Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent automata (such as Talos) and artificial beings (such as Galatea and Pandora).


Hephaestus

Hero of Alexandria created mechanical men and other automatons. He produced what may have been "the world's first practical programmable machine": an automatic theatre.


Hero of Alexandria

Dystopian films imagine automated humanoids that can act and make decisions independently, such as Atlas (2024).

Atlas 2024 with Jennifer Lopez

AILib and our blog on AI.

Advice for AI Developers

1. Embed Ethics from the Start: Ethics should be integrated into every stage of AI development, from conceptualization and design to deployment and maintenance. Consider the potential ethical implications of your AI system before writing the first line of code.

2. Perform Regular Audits and Bias Testing: Regularly audit AI systems for fairness, accuracy, and bias. Use diverse datasets and conduct tests to ensure that your system does not disproportionately disadvantage any group.

3. Prioritize User Consent and Data Privacy: Always obtain informed consent from users before collecting their data. Implement strong data encryption and anonymization techniques to protect user privacy.

4. Design for Transparency: Make your AI models as transparent as possible. Provide users with explanations of how decisions are made and ensure that your system’s operations are understandable, especially when the AI impacts critical decisions.

5. Ensure Robustness and Security: Build AI systems that are robust and resilient to attacks or failures. Implement security measures to protect the system from hacking and unauthorized use.

6. Foster a Collaborative Ethical Culture: Encourage open discussions about ethics within your team and organization. Foster a culture where ethical concerns can be raised without fear of retribution, and where continuous learning about AI ethics is encouraged.

7. Engage with Stakeholders: Engage with a diverse group of stakeholders, including ethicists, legal experts, users, and affected communities, to gather input on potential ethical issues. This helps ensure that the AI system aligns with broader societal values.

8. Prepare for Accountability: Establish clear lines of accountability for AI systems. Have processes in place to address and rectify any negative impacts or errors that arise from the AI’s deployment.

9. Stay Informed on Legal and Ethical Standards: AI ethics is a rapidly evolving field. Stay informed about the latest legal requirements, ethical standards, and best practices in AI development. Engage with the broader AI ethics community to keep up with emerging issues.