AI Connections 2024: Part 2: Technologies and Taxonomies

In Part 2 of the nine-part series on AI Connections 2024, we build a taxonomy of technologies and describe each in turn.

South Island, New Zealand | © 2000 Gary Easterbrook

1. Recap

Recall from the Introduction (Part 1) that the purpose of this series is to assemble a coherent terrain map of AI, as a matter of record, and to contribute to the logical organisation of a vast body of research and rapid progress in AI.

We created a terrain map (below) from multiple sources, acknowledged that it was somewhat imperfect and overlapping in several areas, and noted that it could be improved during the year.

We also discussed the history of AI and the development of these technologies, including much more complex technology stacks and subject maps.

The intention here is to keep the discussion high-level.

2. Terrain map

The suggested taxonomy is as follows (many of these areas overlap):

  • Cognitive computing – (some sources expressly delineate CC as not part of AI)
  • Computer vision
  • Distributed Artificial Intelligence (DAI)
    • Single-agent systems
    • Multi-agent systems
    • AI chatbots, companion AI chatbots
  • Expert systems
  • Future worlds
    • Metaverse environments (education, gaming, virtual reality)
  • Fuzzy logic
  • Machine learning (ML)
    • Deep learning – reinforcement learning, etc
    • Large language models, transformer architecture, attention, etc
    • Neural networks
  • Natural language processing (NLP)
    • Speech translation - different to NLP but both used in combination [1], [2]
    • Chatbots
  • Robotics
    • Brain-computer interface robotics
    • Humanoid and non-humanoid robotics
    • Multi-agent systems
    • Companion AI robotics
    • Industrial robotics (e.g. agricultural, medical, nuclear cleanup, drones, nanobots)

We also noted that industrial uses don't always map cleanly to a technology taxonomy; many uses combine tools and systems from a number of subheads in the taxonomy with non-AI components.

Let us therefore map out the technologies in each area to derive research questions to monitor in 2024.

3. Cognitive computing

Cognitive computing describes AI systems that simulate human thought for supplementing and supporting human cognition [3].

This can involve analysing context, the environment, intentions, semantics, and other inputs that we might use to solve problems, and then leveraging the outputs to aid humans.

Though cognitive computing (‘CC’) is a subset of AI, it conceptually takes a different approach from general AI, although in practice many AI and CC tools are used together.

Generative AI systems, such as ChatGPT, combine natural language processing (NLP), that enables computers to ‘interpret’ human language, together with CC that mimics human cognitive functions, such as learning, reasoning, and problem-solving [4].

Whereas AI takes control to focus on a given problem, CC acts more like an assistant to aid in decision making. CC systems analyse data and strategise, using interactive, contextual, and iterative tools and processes of adaptive learning, to help humans improve accuracy, clarity, and data focus in decision making [5]. CC does not bypass or supersede the human element but acts more like an aid.

Charles Roe, writing for Dataversity, has a very good brief history of cognitive computing and carefully delineates between AI and CC [6].

According to Vijay Kanade at Spiceworks, the top five applications for CC are education; operations research; human-factors engineering (e.g., usability in highly complex equipment); human resource management; and medicine (e.g., brain-machine interfaces, and intelligent prosthetics) [7].

Cognitive computing applications are also currently used in voice assistants, facial recognition, self-driving cars, and smart home devices.

Drew Robb at Infosec cautions about the limitations of cognitive AI, noting that it is hard to fully anticipate the many ways adversaries will use these tools in the coming years, and that misuse is evident within firms even now [8]. He references his colleague Keatron Evans, who has a useful article on how to securely use tools like ChatGPT [9].

Generally, the use of these tools adds another layer of required mindfulness and practices to security protocols within companies.

4. Computer vision

Computer vision (‘CV’) is a field in AI focused on developing automated systems that train computers to interpret the visual world in a similar way to humans. CV systems might be trained at different levels to perform tasks on still and moving images, such as edge detection, pattern detection, object detection, facial recognition, feature matching, and image classification. The results are then collated into a coherent output.
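
To make the lowest of those levels concrete, here is a minimal sketch (in Python with numpy; the toy image and the demo itself are our own illustration, not any particular library's pipeline) of edge detection with Sobel kernels, the kind of primitive feature extraction a CV system builds upon:

```python
import numpy as np

# Sobel kernels: approximate horizontal and vertical intensity gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def cross_correlate(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' cross-correlation (what deep-learning libraries call convolution)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 'image': dark left half, bright right half, i.e. one vertical edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

gx = cross_correlate(image, SOBEL_X)  # responds to vertical edges
gy = cross_correlate(image, SOBEL_Y)  # responds to horizontal edges
edges = np.hypot(gx, gy)              # gradient magnitude
print(edges.round(1))                 # non-zero only around the edge at column 4
```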

How AI systems classify images is richly complex, and well covered by Gaudenz Boesch here: ‘A Complete Guide to Image Classification in 2024’ [10], noting that “Image classification is probably the most important part of digital image analysis. It uses AI-based deep learning models to analyze images with results that, for specific tasks, already surpass human-level accuracy.”

Machine learning has supercharged CV but is more general. ML processes data based on earlier precedents and does not necessarily involve visual data [11], [12], [13].

There is undoubtedly an explosion of progress in the field: Papers with Code registers 40,498 papers on computer vision [14], and there are regular annual ‘top ten’ or ‘best papers’ round-ups, with links here for 2020-2023, along with suggested all-time ‘must-read’ papers in the field [15], [16], [17], [18], [19], [20].

ArXiv also lists recent papers, over 700 from the last five days alone, in a useful link for research purposes [21], and Springer publishes the International Journal of Computer Vision, calling it a “rapidly growing field” [22].

Though the field has moved rapidly on, a good introductory analysis in 2020 notes that “Machine learning and computer vision (have) innovative applications in engineering, medicines, agriculture, astronomy, sports, education (etc)” [23], categorised and clustered in table form.

A November 2023 paper entitled ‘Computer-vision based analysis of the neurosurgical scene – A systematic review’ demonstrates how far CV has progressed in neurosurgery, and notes that few prior studies have been dedicated to the field [24].

5. Distributed artificial intelligence

    • Single-agent systems
    • Multi-agent systems
    • AI chatbots (non-robot forms, companion AI)

A paper entitled ‘Distributed artificial intelligence: Taxonomy, review, framework, and reference architecture’, proposes an excellent taxonomy in a diagram of DAI research: “This taxonomy covers the AI workflow, paradigms, infrastructure, management, and applications of DAI” [25]. A decent understanding of the field can be gained from this paper.

Noteworthy are several subheads under which some of the complexities are explained well. Representative examples are:

i) Communication Optimization - the ability to synchronise and share information between different workers while optimising and reducing communication volume and frequency;

ii) Distributed Data Management (DDM) - required by many applications such as “multi-robotic, multi-groups, autonomous vehicles systems, etc.” Most DDM approaches use voting methods “to select the final decision either locally or as a group”.

Other areas include Distributed Training; Task Scheduling, for available resources; and Distributed Inference, which includes optimising models for deployment so that the systems can start producing “predictions, decisions, or insights”.
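
As a toy illustration of the voting approach to DDM, the sketch below (in Python; the robot names and decisions are invented for the example) lets several agents form local decisions and settles the group decision by majority vote:

```python
from collections import Counter

def majority_vote(local_decisions: dict[str, str]) -> str:
    """Pick the group decision as the most common local decision."""
    counts = Counter(local_decisions.values())
    decision, _ = counts.most_common(1)[0]
    return decision

# Hypothetical local decisions from three warehouse robots.
local = {"robot_a": "reroute", "robot_b": "reroute", "robot_c": "proceed"}
print(majority_vote(local))  # -> "reroute"
```

Real DDM schemes must also handle ties, unreliable agents, and communication failures, which is where much of the research effort lies.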

5.1 Single-agent systems

When there is only one agent in an environment, it is a single-agent system (SAS). The pros and cons of using a single agent instead of a multi-agent system (MAS), plus some examples of either, are well described here [26].

In Stuart Russell and Peter Norvig’s classic textbook, ‘Artificial Intelligence: A Modern Approach, 4th ed.’ [27], adopted by over 1500 schools worldwide [28], they note that single agent programs are commonly designed as “a collection of autonomous sub-agents” within the field of multi-agent systems ([27], p.61).

5.2 Multi-agent systems

Continuing with Russell and Norvig’s textbook, they note that MAS bring with them a whole set of issues not encountered with SAS: agents must consider the ways in which their own actions interact with those of other agents in the MAS environment (ibid., p. 601).

In a deterministic environment, such as multiple robots on a car assembly line, with a finite number of variable actions in that environment, the interactions can be defined fairly precisely. In a more variable environment, such as multiple product-transfer robots in a warehouse, moving around semi-autonomously with variable actions, such as navigating past other robots on a random basis, the interactions are considerably more complex, with more complex awareness, anticipation, and decision-action potentials to account for. One humorous recent account followed the daily work of a ‘Robot Wrangler’ in a warehouse, who has to find robots that go and hide [29]. Other examples of robot behaviour can be found here [30], [31].

5.3 Deep reinforcement learning (RL)

RL is a necessary part of DAI systems; MAS and RL are therefore often ‘lumped together’ in the literature. RL is a branch, or subset, of ML and is discussed in more detail under that subhead below, but the following is useful for now.

Wiki defines RL as “an interdisciplinary area of machine learning and optimal control concerned with how an intelligent agent ought to take actions in a dynamic environment in order to maximize the cumulative reward” [32]. Deep reinforcement learning is similarly defined as “a subfield of machine learning that combines reinforcement learning (RL) and deep learning (DL)” [33], where DL involves layers of iteration, described in Part One of this series in more detail.
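
To ground the definition, here is a minimal tabular Q-learning sketch (in Python; the five-state ‘corridor’ environment, rewards, and hyperparameters are all invented for illustration) in which an agent learns by trial and error to walk right towards a reward:

```python
import random

N_STATES, GOAL = 5, 4                 # states 0..4, reward on reaching state 4
ACTIONS = [-1, +1]                    # step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def pick(vals):
    """Greedy action with random tie-breaking."""
    best = max(vals)
    return random.choice([i for i, v in enumerate(vals) if v == best])

for _ in range(500):                  # episodes
    s = 0
    while s != GOAL:
        a = random.randrange(2) if random.random() < eps else pick(Q[s])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge Q(s,a) towards r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(["right" if Q[s][1] > Q[s][0] else "left" for s in range(GOAL)])  # expect: all "right"
```

The cumulative-reward objective and trial-and-error loop are exactly the elements in the definition above; deep RL replaces the table Q with a neural network.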

Combining MAS and RL provides a rich foundation for capabilities that are used extensively in commercial and research activities. Four good papers on capabilities and uses are as follows:

‘Multi-Agent Deep Reinforcement Learning for Multi-Robot Applications: A Survey’, which discusses the literature on environments in which multiple agents not only learn from their own experiences but also from each other, and applications for this functionality in multi-robot systems [30].

‘Deep Reinforcement Learning for Multi-Agent Interaction’ which discusses research problems that include “scalable learning of coordinated agent policies and inter-agent communication; reasoning about the behaviours, goals, and composition of other agents from limited observations; and sample-efficient learning based on intrinsic motivation, curriculum learning, causal inference, and representation learning” [34].

‘Beyond Robustness: A Taxonomy of Approaches towards Resilient Multi-Robot Systems’ in which the authors analyse how “resilience is achieved in networks of agents and multirobot systems that are able to overcome adversity by leveraging system-wide complementarity, diversity, and redundancy—often involving a reconfiguration of robotic capabilities to provide some key ability that was not present in the system a priori” [35].

‘Deep Reinforcement Learning from Human Preferences’. This concerns the training of complex novel behaviours to enable “sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments” [36].

5.4 AI chatbots

Some chatbots are tied in with robotics and natural language processing (NLP), and there are some chatbots that are multi-agent systems.

IBM defines a chatbot as a computer program that simulates human conversation with an end user, and though not all are equipped with AI (some are non-AI, rules-based), most use “conversational AI techniques like natural language processing (NLP) to understand the user’s questions and automate responses to them” [37]. Some commentators caution that there are differences between AI chatbots, conversational AI, and agents [38], [39], though the differences seem quite fine.

An excellent source that explains chatbots clearly and some of their uses can be found here in ‘10 of the Most Innovative Chatbots on the Web’ [40].

In non-robotic companion AI, Replika is one ‘AI companion who cares’, and users are invited to create their own companion [41]. This is an area gaining significant traction, with many uses, including helping elderly people cope with loneliness. Some people attest to the benefits of emotional attachment to their chatbots [42], [43].

In commercial uses, helpdesks are already using chatbots and measuring their effectiveness [44], and in other research, the extent to which they could replace human agents is discussed [45].

6. Expert systems

Expert systems have a rich history, generally acknowledged to have been developed in the 1970s by computer scientist and founder of Stanford’s Knowledge Systems Laboratory, Edward Feigenbaum.

In his 1988 paper ‘Knowledge Processing: From File Servers to Knowledge Servers’, he wrote “Scientists and technologists, like all creators, must dream, must put forth a vision, else relegate their work to near-pointless incrementalism. My dream is about the future of artificial Intelligence research and development over the next several decades, and the knowledge systems that can be produced thereby to assist the modern knowledge worker”[46].

He wrote of an explosion of expert systems in the late 1970s, in medicine, “engineering, in manufacturing, in geology, in molecular biology, in financial services, in diagnostic servicing of machinery, in military signal processing, and many other areas.”

The document is extraordinarily prescient, speaking to present-day reality around the evolution of the natural interface, and a second era in which it will be commonplace for systems to interact with users in human language, “within the scope of the systems’ knowledge” (see also [47], [48]).

An expert system normally has five components: i) a knowledge base; ii) an inference system; iii) a user interface; iv) a learning module; and v) an explanation module [49], [50].
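
A minimal sketch of the first two components, a knowledge base of rules and a forward-chaining inference engine, follows (in Python; the medical-style facts and rules are invented for illustration, not taken from MYCIN or any real system):

```python
# Knowledge base: each rule is (set of required facts, fact to conclude).
RULES = [
    ({"fever", "positive_culture"}, "bacterial_infection"),
    ({"bacterial_infection", "low_body_weight"}, "reduce_dose"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Fire every rule whose conditions hold, until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"fever", "positive_culture", "low_body_weight"}))
# -> includes the derived facts 'bacterial_infection' and 'reduce_dose'
```

The user interface, learning module, and explanation module then sit around this core; an explanation module, for instance, would record which rules fired and why.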

In a 2017 paper, Haocheng Tan categorised expert systems into four sub-categories: Rule-Based; Framework-Based; Fuzzy Logic-Based; and Neural Network-Based [51].

Elsevier has a useful ‘most downloaded articles’ list for Expert Systems with Applications, covering everything from anomaly detection of financial fraud, to self-driving cars, traffic forecasting, sentiment prediction from social media platforms, forest classification modelling, ECG analysis, and sign language machine translation [52].

Most expert systems are designed to assist humans in critical roles. Among the top examples of expert systems are:

MYCIN - the earliest successful medical implementation, which assisted in the diagnosis and treatment of bacterial infections, particularly of the blood, and could recommend medications based on the patient’s weight.

XCON - a digital configurator for computer systems

PXDES - identifies the type and stage of lung cancer

DENDRAL - predicts the molecular structure of chemicals (for all see [53], [54], [55], [56]).

What does AI bring to expert systems? There are three broad areas that can be enhanced: i) the acquisition and presentation of knowledge; ii) the database of facts and rules; and iii) the reasoning engine to answer questions and solve domain-specific problems. Dave Balroop, CEO of TechUnity, explains it well in this LinkedIn article [57].

Russell and Norvig’s classic textbook, ‘Artificial Intelligence: A Modern Approach, 4th ed.’, has a section on the historical development of expert systems up to 1988 when almost all major US corporations were developing them and investing in AI, just prior to the onset of an AI winter. Many of the systems failed to deliver, according to Russell and Norvig, because they could not learn from experience ([27], p.24). Over time, systems were developed in ‘standardized knowledge representation’ that assisted in the creation of new expert systems (p. 338). Undoubtedly, AI has greatly moved expert systems forward, including the ability to learn, a major step that back-propagation helped to deliver.

7. Future worlds

    • Metaverse environments (education, gaming, virtual reality)

Before looking at the technologies involved, a few comments on the status are in order. There is no shortage of commentary that carries the phrase ‘metaverse’ and the word ‘failure’ in the same breath. As Kate Wagner reported in The Nation in July 2023, “If you ask kids what kinds of spaces they find themselves in, they won’t say Horizon Worlds. They’ll say Roblox (the controversial monetized game-design platform), Minecraft (an open world building video game)… and Fortnite (a player-versus-player combat game with a great deal of customization)” [58]. Yet, some argue, plausibly, that these virtual worlds are already transforming into “full-service metaverse platforms” [59].

Despite Meta burning through $46.5bn on the metaverse (at a time when Zuckerberg’s net worth expanded by more than that), Paolo Confino at Yahoo Finance reported that Zuckerberg didn’t seem any less committed to it, and quoted him as saying that one of the most interesting questions of the coming decades is “how to bring together our physical and digital worlds into a coherent and good experience” [60].

Apparently many companies are betting on the long term, expecting the metaverse industry to grow into a $5trn market by 2030 [61]. In 2022, it was reported that the kids weren’t buying the metaverse, suspicious, perhaps, of all the likely micro-charges [62]. In 2021, writing in the Guardian, Rob Davies warned of video game loot boxes being linked to gambling, with calls for in-game rewards to be classed as betting products to protect children [63]. On Feb 4 2024, he wrote again in the Observer about gamers being lured into casino-style betting to win in-game bonuses [64].

It looks like the technology has been leveraged to go straight to the money. In fairness, there are other narratives, like the Covid story by Adrienne So in Wired, ‘My Family Is Trapped in the Metaverse - Virtual reality isn’t great, but it’s a lot better than dealing with everything else out there’ [65].

Though the term was coined by Neal Stephenson in his book ‘Snow Crash’ [66], as Matthew Ball writes in his book ‘The Metaverse - and How It Will Revolutionize Everything’ [67], Stephenson conceived it as a kind of refuge from ‘franchise-organised quasi-national entities’ and ‘burbclaves’ (‘suburban enclaves’).

Stephenson cautioned against putting too much meaning into his term, and the concept of the metaverse goes back much further, conceivably to before the novel ‘Do Androids Dream of Electric Sheep?’ by Philip K. Dick (which became the classic movie ‘Blade Runner’), and, of course, to William Gibson’s ‘cyberspace matrix’ in his novels ‘Neuromancer’ [68], ‘Count Zero’ [69], and ‘Mona Lisa Overdrive’ [70]. Not forgetting, also, Helge T. Kautz’s ‘Farnham’s Legend’ [71], which was the beginning of the X-Universe immersive space-world saga.

The Neuromancer trilogy was one of the main influences on the Matrix movies, as evident in the Wikipedia entry for Count Zero: “The Cyberspace Matrix, a synergistically linked computer network of databases that encompasses all information on Earth, has become home to sentient beings. But most of humanity remains unaware” [72].

We might also go back to Jean Baudrillard and his concept of hyperreality, particularly in his book ‘Simulacra and Simulation’ [73], although his earlier works developed the concept. He contended that human consciousness is unable to distinguish reality from a simulation of reality, especially in technologically advanced postmodern societies [74], [75].

Thierry Jourquin, co-founder of XRIntelligence, also credits Frank Rose and his book ‘The Art of Immersion’ [76] with capturing the key drivers in the transition from being spectators and passive consumers of mass media to participants, creating and selecting experiences to immerse ourselves in at will. Jourquin’s article is an in-depth examination of hyperreality at the core of the metaverse [77].

If we conceive of the metaverse as simply a digital world where companies try to sell us stuff, the partial truth of that belies some of the truly exceptional work going on in developing the tools, the compute, and the immersive experiences that will be a significant part of our human futures, including gaming, education, virtualisation for scientific research, and, who knows, maybe universes where we go for everything from news to movies to holidays to interactive entertainment.

The metaverse, it seems, is “mostly intangible experience”, Matthew Ball writes; it is “a persistent network of virtual worlds, data, and supporting systems”. It will be physical devices (including those creating the environments), he believes, that will act as “the gateway to accessing and creating these experiences… this fact provides the device manufacturers and operators with significant soft and hard power” ([67] p.165).

The computing, scheduling, and rendering back-ends to create metaverse environments require compute engines; ChessMax offers a GitHub site with 47 ‘awesome’ game engines, which demonstrates the deep innovations in 2D and 3D graphics engines and interfaces available [78]. One of the most prominent is Unity, which provides a framework and developer tools for both 2D and 3D games. Unity 6 is due out in 2024 [79]. There are several others; Bevy is interesting, being open source and built on Rust, calling itself a “refreshingly simple data-driven game engine” [80]. GEARS is also interesting in that it is intended for scientific use: “The Game-Engine-Assisted Research platform for Scientific computing (GEARS) is a visualization framework developed … to perform simulations and on-the-fly data exploration in virtual reality (VR) environments” [81].

In all of this, the parallel processing capabilities of graphics processing units (GPUs) are a key enabler, in everything from rendering engines to blockchain, encryption, climate pattern prediction, aerospace, energy, and a host of other industries and research applications [82], [83]. Two good investigations of the graphics and visualisation ecosystems can be found here [84], [85]. The virtual reality and augmented reality landscape, including headsets, is well described, briefly, here [86], and in more detail here [87], [88], [89], [90].

8. Fuzzy logic

Lotfi A. Zadeh, born in Azerbaijan, introduced the term fuzzy sets in a 1965 paper [91], and is generally regarded as the creator of fuzzy logic, though Wiki recognises a form called infinite-valued logic that goes back to 1920 [92].

Fuzzy logic allows, in mathematical terms, a generalisation - for example (as Wiki uses) from ‘John is bold’ to ‘John is somewhat bold’: “Without this generalization, we can only describe the value of a person's boldness as a number (e.g., 0.5) or as an interval (e.g., {0.25, 0.75}). With this generalization, one can express the semantic meaning of "somewhat bold" as a fuzzy set in the domain of 0 and 1” [93]. Fuzzy logic enables AI systems to reason in a more human-like way, and to represent that mathematically [94], [95], at the cost of some specificity, which may not matter in some applications.
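
A minimal sketch of the idea in code (Python; the triangular shape and the 0-to-1 ‘boldness’ scale are our own illustrative assumptions): a membership function maps a crisp value to a degree of membership in [0, 1], and fuzzy AND is commonly taken as the minimum of two degrees:

```python
def triangular(x: float, lo: float, peak: float, hi: float) -> float:
    """Degree of membership in a triangular fuzzy set over [lo, hi]."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def somewhat_bold(x: float) -> float:
    # 'Somewhat bold' as a fuzzy set peaking at 0.5 on a 0..1 boldness scale.
    return triangular(x, 0.0, 0.5, 1.0)

print(somewhat_bold(0.50))  # 1.0 - fully 'somewhat bold'
print(somewhat_bold(0.25))  # 0.5 - partially
# A common fuzzy AND takes the minimum of the two membership degrees.
print(min(somewhat_bold(0.4), somewhat_bold(0.9)))  # 0.2
```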

Zadeh was a remarkable man, Wiki has a good account of his story: he studied in Tehran and left for the USA after waiting months for his papers to come through. After completing his doctorate at Columbia, he taught there for ten years and became a full professor in 1957, before being offered a position at the University of California, Berkeley.

He made important contributions to “linear system theory and classical control systems, and fuzzy sets, fuzzy logic, and related science and technology”. He also contributed to the AI community, including at the Dartmouth Workshop that coined the term "artificial intelligence", and became a frequent contributor to other AI conferences and publications such as AI Magazine [93].

Many warm tributes to Zadeh, and interviews with him, can be found online - a true legend [96], [97], [98], [99].

9. Machine learning (ML)

    • Deep learning – reinforcement, etc
    • Large language models, transformer architecture, attention, etc
    • Neural networks

9.1 Deep learning, multi-agent reinforcement

Firstly, deep learning is a subset of machine learning, the ‘deep’ referring to the use of multiple layers in the network to progressively extract higher-level features from the raw input. It may be edges, patterns, or shapes (etc) at each level that progressively build up a learned object. Wiki explains it well: “deep learning refers to "computer-simulate" or "automate" human learning processes from a source (e.g., an image of dogs) to a learned object (dogs)... each level learns to transform its input data into a slightly more abstract and composite representation.” Deep learning is now most often based on multi-layered artificial neural networks, but deep models can also be built without them [100].
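
The layering is easy to see in code. Here is a minimal numpy sketch (the sizes are arbitrary, and the random weights stand in for trained values) of two stacked layers, each applying a learned linear map plus a nonlinearity to produce a slightly more abstract representation:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One layer: linear map followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(1, 8))                        # raw input features
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)    # level 1: low-level features
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)     # level 2: more abstract features

h1 = layer(x, w1, b1)
h2 = layer(h1, w2, b2)
print(h1.shape, h2.shape)  # (1, 16) (1, 4)
```

In a trained vision network, the first level might respond to edges and the second to shapes built from those edges.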

Reinforcement learning is well described in this paper: RL is a “machine-learning method in which one agent or a group of agents maximises its long-term return through repeated interaction with its environment. Agents are not told what actions to take and must learn their optimal behaviour via trial-and-error… Multiagent reinforcement learning (MARL) differs from single-agent systems foremost in that the environment’s dynamics are determined by the joint actions of all agents in the environment, in addition to the uncertainty already inherent in the environment” [101]. MARL is much more complex than single-agent reinforcement learning (SARL).

The reinforcement aspect refers to the process created to maximise the expected cumulative reward through iteration, via a tuneable reward function, as explained in this paper: “the shaping of a reward function plays a crucial role in an agent’s learning. Designing an optimal reward function is not a trivial task” [102]. Understanding reward functions is well explained here [103], with some open questions and limitations discussed here [104].

9.2 Large language models (LLMs)

LLMs process enormous amounts of data, mostly from publicly available datasets (for example, code repositories such as GitHub), from web scraping, social media user behaviour, input typing, etc., captured by social media companies, from useful semi-public datasets, or from private company datasets.

The textual inputs (now also video and audio) are processed through models with up to billions of weights, using the transformer architecture, a type of deep learning structure based on artificial neural networks (ANNs) that utilises representation (also called feature) learning. Transformers were proposed by Ashish Vaswani and members of the Google Brain team, in 2017, in a paper entitled ‘Attention Is All You Need’ [105]. Attention is a critical mechanism in generative AI, which is itself a subset of machine learning (ML). As noted in Part 1 of this series, there were earlier attention-like innovations.

The paper proposed, however, a novel neural network architecture that would replace traditional recurrent neural networks (RNNs) and convolutional neural networks (CNNs) with an attention-based mechanism.

Feature learning can be supervised (inputs are labelled), unsupervised (inputs are not labelled), or self-supervised (the machine constructs input-label pairs from each datapoint), and the processes are referred to as ‘training’. Training is an iterative process, teaching and fine-tuning algorithms to build weighted prediction models and to extract features that are relevant to specific goals.
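
As a toy illustration of the self-supervised case, the sketch below (Python; the sentence is an invented example) constructs input-label pairs from a single datapoint by masking each word in turn and treating the masked word as the label:

```python
def masked_pairs(tokens: list[str]) -> list[tuple[list[str], str]]:
    """Build (masked input, target word) training pairs from one sentence."""
    pairs = []
    for i, word in enumerate(tokens):
        masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        pairs.append((masked, word))
    return pairs

sentence = "the rain in spain stays mainly in the plain".split()
for inp, label in masked_pairs(sentence)[:2]:
    print(" ".join(inp), "->", label)
```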

Transformers implement a form of machine-based attention that mimics cognitive attention. Nvidia has a simple definition: “A transformer model is a neural network that learns context and thus meaning by tracking relationships in sequential data like the words in this sentence” [106]. They can be optimised in large-scale training hardware, or, according to Apple, in no hardware at all [107], and a good description of how they work can be found in this reference [108]. LLMs require additional components, input control, chatbots, and other features, and there are several types of models, for example multimodal (to include images), fine-tuned, domain-specific, and zero-shot, requiring no additional training [109].
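
Here is a minimal numpy sketch of the scaled dot-product attention at the heart of the transformer, following the formula from ‘Attention Is All You Need’ [105] (the sequence length, dimensions, and random inputs are invented): each token’s query is scored against every key, and the softmaxed scores mix the values:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how relevant each key is to each query
    weights = softmax(scores, axis=-1)   # each row sums to 1: the 'soft' weights
    return weights @ V                   # weighted mix of the value vectors

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8                      # 4 tokens, 8-dimensional embeddings
Q = K = V = rng.normal(size=(seq_len, d_k))   # self-attention: all from the same tokens
print(attention(Q, K, V).shape)          # (4, 8): one context-mixed vector per token
```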

The process of learning calculates ‘soft’ weights for each input, which involves tokenising data and embedding the tokens in a context window. There are ‘hard’ weights also, normally invariant, based on the design of complex neurons that exhibit invariant capabilities [110], which can decrease model training time and improve performance [111], [112].

Each tagged token can be a word, a sub-word, or a character, and the window enables contextual tokens to be processed around a target token, such that, for example, “the rain in Spain stays mainly in the ___” is likely to predict ‘plain’ as the last word, the line famously associated with George Bernard Shaw’s play Pygmalion (1912). This depends, of course, on the dataset on which the predictability was trained.
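
That dependence on the training data is easy to demonstrate with a toy next-word predictor (Python; a one-sentence ‘corpus’ stands in for the web-scale datasets above, and real LLMs use transformer attention rather than bigram counts):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: list[str]) -> dict[str, Counter]:
    """Count which word follows which in the training text."""
    model: dict[str, Counter] = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

corpus = ["the rain in spain stays mainly in the plain"]
model = train_bigrams(corpus)
print(model["the"].most_common())  # [('rain', 1), ('plain', 1)] - only what the corpus saw
```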

LLMs that have been trained on one or more datasets can then be asked a question in human language, known as a prompt, which may be a few thousand characters long and is entered in a context window. A response is known as a completion, and prompts and completions can continue in the round, albeit with some limitations that are important to understand. Context windows are a sort of dynamic sliding record of the tokens (words and sub-words) that make up the conversation, maintaining a limited context: once the token limit has been reached, the window ‘slides forward’ with the conversation, and older turns in the conversation are phased out. This is why it is important to phrase questions to optimise the likelihood of an optimum completion.
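
A minimal sketch of that sliding behaviour (Python; real systems count sub-word tokens rather than whole words, and may summarise rather than simply truncate):

```python
def slide_window(tokens: list[str], limit: int) -> list[str]:
    """Keep only the most recent `limit` tokens; older turns fall out."""
    return tokens[-limit:]

conversation = ["hello", "how", "are", "you", "today", "friend"]
print(slide_window(conversation, 4))  # ['are', 'you', 'today', 'friend']
```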

GPT-3, the predecessor to the current ChatGPT-4, was limited to 4,000 tokens (around 3,000 words), whereas with GPT-4 the context window has been extended up to 32,000 tokens in the largest model, around 25,000 words. This allows for stronger context-relevance and a “more comprehensive understanding of the input”. It also significantly improves the ability to maintain context over longer documents and conversations [113].

ChatGPT-5 is rumoured to be in training and of a magnitude ten times that of GPT-4. As discussed in more detail in The Fragile Sea bi-weekly newsletter TFS#02, the number of parameters is thought to be well over 1trn. Parameters are explained well here [114].

LLMs are evolving so rapidly that this section is virtually out of date the minute it is written. A recent (Feb 20, 2024) paper, ‘Large Language Models: A Survey’, is an excellent source for the latest updates [115].

ChatGPT is not the ‘only game in town’. Google developed GLaM, “a trillion weight model that can be trained and served efficiently (in terms of computation and energy use)” [116], still based, however, on the transformer model architecture; other architectures also exist [117], [118].

More recently, the Meta tie-up with Microsoft Azure and version 2 of Llama has proved very popular, with stand-alone models suitable for laptops and phones [119], [120], and with optimised chips coming for phones in late 2024 [121], [122], [123].

There is also huge excitement around OpenAI’s announcement of Sora, the text-to-video model that creates superb one-minute videos in high resolution, at sizes from phone to laptop, up to a maximum of 1920 × 1080 pixels. Given a paragraph of text, Sora will create a realistic video, but it can do more: for example, it can create an ‘end shot’ and generate the minute of video leading up to it, four times simultaneously, from four different story start points [124].

9.3 LLMs challenges

The challenges are being rapidly worked on in many aspects, so it is difficult to keep up with innovations in resolving them; by the time they are set down on paper here, they may well be substantially improved. It is not yet clear whether some of them can be fully resolved with current approaches (and they may therefore require innovative new methods), but it is still early days.

The training of sequential tasks in artificial neural networks brings with it many challenges, one being catastrophic forgetting. According to this paper, in 2016: “This phenomenon, termed catastrophic forgetting, occurs specifically when the network is trained sequentially on multiple tasks because the weights in the network that are important for task A are changed to meet the objectives of task B” [125].

The phrases ‘continual learning’ or ‘lifelong learning’ relate to the ability of humans and animals “to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan”. The biological factors are well known, such as “structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration”; however, continual learning has represented a “long-standing challenge for machine learning and neural networks and, consequently, for the development of artificial intelligence”, as reviewed in this 2019 paper [126].

More recently, several solutions have been proposed and developed, one believed to operate in vivo (Latin: ‘within a living organism’) by “adding a context-dependent gating signal, such that only sparse, mostly non-overlapping patterns of units are active for any one task” [127]. Another solution being to model sleep: “Interleaving new task training with periods of off-line reactivation, mimicking biological sleep, mitigated catastrophic forgetting by constraining the network synaptic weight state to the previously learned manifold, while allowing the weight configuration to converge towards the intersection of the manifolds representing old and new tasks. The study reveals a possible strategy of synaptic weights dynamics the brain applies during sleep to prevent forgetting and optimize learning” [128].

Both are examples of the continued symbiosis between neuroscience and artificial intelligence.
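
A minimal numpy sketch of the first of those two solutions, the context-dependent gating signal from [127] (the layer sizes and sparsity level here are invented): each task gets its own sparse, mostly non-overlapping binary mask over the hidden units, so training on task B disturbs few of the units task A relies on:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 100
# One sparse binary mask per task: ~20% of units active, drawn independently.
masks = {t: (rng.random(hidden) < 0.2).astype(float) for t in ("task_a", "task_b")}

def gated_forward(x: np.ndarray, w: np.ndarray, task: str) -> np.ndarray:
    """Apply the task's gate so only its sparse subset of hidden units is active."""
    return np.maximum(0.0, x @ w) * masks[task]

x = rng.normal(size=(1, 32))
w = rng.normal(size=(32, hidden))
shared = int((masks["task_a"] * masks["task_b"]).sum())
print(f"hidden units shared by both tasks: {shared} of {hidden}")  # small by design
print(gated_forward(x, w, "task_a").shape)  # (1, 100), mostly zeros
```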

9.4 Emergent properties

Current systems appear to be producing emergent outputs that resemble human meaning. Emergent properties of a system can be characterised as unpredictable behaviours that are not purposefully programmed but emerge as a consequence of the model [129].

Another way of looking at it is to define emergent abilities as “abilities that are not present in smaller-scale models but are present in large-scale models; thus they cannot be predicted by simply extrapolating the performance improvements on smaller-scale models” [130].

Take the following three references:

i)  Jason Wei, who works on ChatGPT, and tracks the number of identified emergent capabilities of LLMs, on a fascinating website; the current number is 137 [130];

ii) Stanford Researchers say that LLMs emergent capabilities are largely a mirage [131], [132];

iii) ‘Emergent Abilities When AI LLMs Learn Stuff They Shouldn't Know’ by David Ramel. He notes a well-known limitation, namely hallucinations, where “advanced systems like ChatGPT and GPT-4 from OpenAI, confidently respond to a query with false or even wholly made-up information. Nobody is exactly sure how or why they happen, though experts have proffered various explanations” [133].

Consider the refreshingly frank 155-page ‘Sparks of Artificial General Intelligence: Early experiments with GPT-4’, which is current in terms of the most recent ChatGPT model available, but dated (March 2023) in terms of the developments and papers since, on the same model. Nevertheless, it is well worth reading, noting from the abstract: “We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all these tasks, GPT-4’s performance is strikingly close to human-level performance… we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system” [134].

Artificial general intelligence (AGI) is a state where the ability of the machine is indistinguishable from human activities. Doctoral researcher Karthik Subbiah Ravishankar defines it well, noting that AGI should be able to perform any intellectual task a human can. This includes understanding context, adapting to new situations, solving unfamiliar problems, and even creative thinking. Ravishankar notes, however, that AGI should go beyond just mimicking human responses. It should extend to independent thought, an awareness of “the significance of its actions”, and adaptability to new situations. He suggests AGI should also be able to display emotions or (at least) “an understanding of human values and ethics” [135]. This is a considerable stretch towards being virtually human.

Consider also the Quanta Magazine article by Stephen Ornes, noting that “Large language models like ChatGPT are now big enough that they’ve started to display startling, unpredictable behaviors” [136]. He notes work by scientist Yi Tay (originally at Google, now at Reka) and others on a method called “chain-of-thought prompting”, which asks the model to explain its reasoning and “may help researchers begin to investigate why emergence occurs at all”.

9.5 Other issues to resolve with LLMs

The models are power hungry, at least to train. In June 2023, it was noted that the previous generation of ChatGPT, version 3, had consumed roughly the equivalent of the annual power consumption of 200 Germans in 2022 [137], and emitted around 500 metric tons of carbon, just in training [138]. A number of initiatives are under way, both in the architectures and in the dataset sizes, to remediate the issue [138].

Furthermore, large language models need enormous amounts of data. A paper by Epoch Labs notes: “Our work suggests that the current trend of ever-growing ML models that rely on enormous datasets might slow down if data efficiency is not drastically improved or new sources of data become available” [139]. An MIT Technology Review article by Tammy Xu suggests that we may run out [140], and a Microsoft senior solutions architect, Sharon Smith, asks whether running out of machine learning data will become a bottleneck for scaling large language AI models [141].

In addition, the questions of who owns that data and the rights to scrape that data from the web are also relevant for discourse. Google recently submitted to the Australian Government that “AI systems should be able to mine publishers’ work unless companies opt out”, as reported by Josh Taylor in the Guardian [142]. This is a subject tracked in Part 6: Ethics, oversight and legal, of this series.

9.6 Neural nets

An artificial neural network is a collection of coded elements in computing architecture based on a neural model of how a human brain functions. The processing units are ‘nodes’, similar to neurons. An outstanding paper on ‘Neuroscience-Inspired Artificial Intelligence’ by the CEO of DeepMind, Demis Hassabis, and colleagues, can be found here [143].

The elements are explained in more detail in Part 1 to this series, including the history of multiple innovations and discoveries in both neuroscience and artificial intelligence, that have combined to deliver innovation and progress in AI. Chris Nicholson at Pathmind gives a good explanation [144], and different angles are worth the time here [145], [146].

10. Natural language processing (NLP)

    • speech translation is different to NLP but both used in combination
    • chatbots (see above)

Wiki defines NLP as “an interdisciplinary subfield of computer science and linguistics… primarily concerned with giving computers the ability to support and manipulate human language”, and discusses the history, from Symbolic NLP in the 1950s, through Statistical NLP in the 1990s, to Neural NLP in the present day [147].

Alan Turing, the “English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist” [148], proposed what became known as the Turing Test, in a published paper, ‘Computing Machinery and Intelligence’ (October 1950). The test was an attempt to define a standard for a machine to be called "intelligent". A computer could be said to "think" if a human interrogator could not tell it apart, through conversation, from a human being.

In his paper, he firmly stated his belief: “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs” [149].

DeepLearning.ai have a useful ‘Complete Guide To Natural Language Processing’ and note that it is now an integral part of daily life, from customer service chatbots, to interpreting or summarizing electronic health records, to Amazon’s Alexa and Apple’s Siri [150].

The article also notes that chatbots divide into two sub-categories - database query and conversational. Companion AI is discussed elsewhere, and in the TFS#01 bi-weekly newsletter, and is gaining considerable traction in meeting some of the needs of loneliness and companionship - three excellent sources here [151], [152], [153].

11. Robotics

    • Brain-computer interface robotics
    • Humanoid and non-humanoid robotics
    • Multi-agent systems - some crossover here with DAI above
    • Chatbots, companion AI (some of which aren't robots)
    • Industrial robotics (e.g. agricultural, medical, nuclear cleanup, drones, nanobots)

11.1 Brain-computer interface robotics (BCI)

On January 30 2024, Elon Musk's Neuralink announced it had implanted a brain chip in its first human subject. For people who are ‘locked in’ and can only communicate, for example, by flicking their eyelids, this is a development that could allow computer programs, software, and robotic motion to be controlled by severely paralyzed people, using only the neuron spikes from their thoughts.

Eliza Strickland, in the IEEE Spectrum magazine, writes of a competitor, Synchron, that is close to commercialisation of a system that does not require surgical implant. As Strickland writes “The company has already put 10 of its innovative brain implants into humans during its clinical trials, and is pushing ahead to regulatory approval of a commercial system" [154].

Electrical and chemical signals are sent around the body through neuron spikes; brain-computer interface (BCI) developments mean that people can engage in limited activities through thought alone.

11.2 Humanoid and non-humanoid robotics

This will no doubt be the year of robots. IEEE Spectrum ran a full feature in January, in particular an extended article by Evan Ackerman, ‘Year of the Humanoid’ [155], which connects back to his November 2023 Spectrum article ‘A Robot for Humanity’ [156].

Some of the videos on YouTube are extraordinary in the human-like flexibility and range of movement achieved [157], [158], [159].

11.3 Multi-agent systems (some crossover here with DAI above)

MAS have to coordinate the movements of multiple robotic devices in an environment, and are highly complex. MAS must be able to manage situations where more than one robotic device may be making decisions (such as movements around a warehouse) that could impact other robots; sensory and other navigation aids and decision potentials must therefore be sufficiently robust.

11.4 Chatbots, companion AI (some of which aren't robots)

These are included under robotics here on account of the development of physical robots, but they carry multiple AI components, including NLP, chatbots, and agents. This field has seen some particularly interesting research over the past few years. The references in previous sections are sufficient; however, of further interest are the following studies on their use [160], [161], [162], [163].

11.5 Industrial robotics (e.g. agricultural, medical, nuclear cleanup, drones, nanobots)

Agricultural use of robotics and AI is booming [164], [165], described as ‘transforming the industry’ [166]. Similarly for medical and surgical applications [167], [168], [169], [170], [171], including nanobots [172], [173].

Combining AI with drones and other mobile devices has multiple applications, in precision farming, security, logistics, search and rescue, military, and monitoring [174], [175], [176], [177], [178]. Similarly in nuclear energy and waste cleanup [179], [180], [181], [182].

12. Conclusion

This concludes Part 2: Technologies of this nine-part series AI Connections, 2024. Below, please find the Part 2 research questions and initial sources.

In Part 3 we will look at the commercial exploitation of AI and some of the extraordinary benefits that are already evident in the use of the most up-to-date AI systems and tools.

I would be delighted to welcome you to the free bi-weekly newsletter here.

Thank you for reading, Take care, Brent.

Research ‘briefs’ on AI technology for 2024

Identified areas:

  • Technology progress – new innovations in models, and architectures
  • Datasets and data volumes – new, smaller, more focused
  • Modular add-ons, optimisations, error checking
  • Limitations and solutions – e.g., hallucinations, gender bias, (other) bias     

Research Hypotheses / Objectives

  • Are the limitations unresolvable? What changes were made to solve them?
  • Are there clear paths to future technology from current technologies?
  • What technological breakthroughs have been made this year?

Sources

  • Many papers will be referenced, however, ‘The GPT-4 System Card’ [183], and ‘Sparks of Artificial General Intelligence: Early experiments with GPT-4’ [134], are a good place to start, even if events have moved rapidly on.
  • Books, not limited to:
    • ‘Atlas of AI’, Kate Crawford [184]
    • ‘God | Human | Animal | Machine’, Meghan O’Gieblyn [185]
    • ‘Artificial intelligence: a modern approach’, 4th Ed., Stuart J. Russell, Peter Norvig [27]

 

Room 5000 - a short story I wrote in 1981 about a computer becoming sentient

TFS#09 - What do Neoliberalism, Friederich Hayek, markets, algorithms, AI, and creativity have in common? We delve into these subjects for more connections

TFS#08 - What are the correlations between growth, debt, inflation, and interest rates? In this business edition of The Fragile Sea, we go hunting in corporate, institutional, and academic papers for insights in the face of heightened political, economic, corporate, and environmental risks, and more besides!

TFS#07 - We discuss a mixing pot of subjects - the state of AI, will there be food shortages this summer? good things and not so in energy, pandemics - are we ready? some remarkable discoveries, and more!

TFS#06 - Can AI produce true creativity? We discuss music, art and creativity, why human creators have a strong future, and why we must assure that they do

TFS#05 - Practical guides for implementing AI, in other news, a revisit on CRISPR, and events in spaceweather, fake publishing, spring blossoms, and more!

TFS#04 - Has Artificial General Intelligence (AGI) arrived already? We look at the goings on in AI over the past four months

TFS#03 - AGI and machine sentience, copyright, developments in biotech, space weather, and much more 

TFS#02 - Sam Altman's $7trn request for investment in AI, economic outlooks, and happenings in biotech, robotics, psychology, and philosophy.

TFS#01 - Economic outlooks, and happenings in AI, social media, biotech, robotics, psychology, and philosophy.

AI 2024 Series

Part 1: Introduction / History of AI

Part 2: Technologies

Part 3: Commercial uses

Part 4: Neural architectures and sentience - coming soon!

Part 5: Meaning, Language, and Data

Part 6: Ethics, oversight and legal

Part 7: Media and social

Part 8: Future humanity

© 2023 The Fragile Sea - all rights reserved. For permissions, please contact us



[1]:         N. Kumawat, ‘Difference between Natural Language Processing and Speech Recognition’, InsideAIML, 2021. https://insideaiml.com/blog/differencebetween-natural-language-processing-and-speech-recognition-1059

[2]:         Stone Water, ‘Is speech recognition part of NLP?’, Quora, 2022. https://www.quora.com/Is-speech-recognition-part-of-NLP

[3]:         K. Yasar, A. S. Gillis, and B. Botelho, ‘What is Cognitive Computing? | TechTarget’, Enterprise AI, 2023. https://www.techtarget.com/searchenterpriseai/definition/cognitive-computing

[4]:         M. Frąckiewicz, ‘ChatGPT and the Future of Cognitive Computing: Advancing Human-like AI Capabilities’, TS2 SPACE, Apr. 30, 2023. https://ts2.space/en/chatgpt-and-the-future-of-cognitive-computing-advancing-human-like-ai-capabilities/

[5]:         Clickworker, ‘What is Cognitive Computing? Definition, Advantages, Technology’, clickworker.com, 2023. https://www.clickworker.com/ai-glossary/cognitive-computing/

[6]:         C. Roe, ‘A Brief History of Cognitive Computing’, DATAVERSITY, May 08, 2014. https://www.dataversity.net/brief-history-cognitive-computing/

[7]:         V. Kanade, ‘Cognitive Science: The Study of Human Mind and Intelligence’, Spiceworks, 2023. https://www.spiceworks.com/tech/artificial-intelligence/articles/what-is-cognitive-science/

[8]:         D. Robb, ‘ChatGPT and the strengths and limitations of cognitive AI | Infosec’, Oct. 27, 2023. https://resources.infosecinstitute.com/topics/industry-insights/chatgpt-strengths-limitations-of-cognitive-ai/

[9]:         K. Evans, ‘AI Best Practices: Securely Use Tools like ChatGPT | Infosec’, May 10, 2023. https://resources.infosecinstitute.com/topics/security-awareness/ai-best-practices/

[10]:       G. Boesch, ‘A Complete Guide to Image Classification in 2024’, viso.ai, Jan. 30, 2023. https://viso.ai/computer-vision/image-classification/

[11]:       Arm, ‘What is computer vision’, Arm | The Architecture for the Digital World, 2023. https://www.arm.com/glossary/computer-vision

[12]:       Kili, ‘Key Differences Between Computer Vision and Machine Learning’, Kili Technology, 2023. https://kili-technology.com/data-labeling/computer-vision/computer-vision-and-machine-learning-differences

[13]:       Simplilearn.com, ‘What Is Computer Vision: Applications, Benefits and How to Learn It’, Simplilearn.com, Apr. 28, 2021. https://www.simplilearn.com/computer-vision-article

[14]:       Papers with Code, ‘Papers with Code - Computer Vision’, 2023. https://paperswithcode.com/area/computer-vision

[15]:       L. Bouchard, ‘Top 10 Computer Vision Papers 2020’, KDnuggets, 2021. https://www.kdnuggets.com/top-10-computer-vision-papers-2020

[16]:       L. Bouchard, ‘Top 10 Computer Vision Papers of 2021’, Louis-François Bouchard, aka What’s AI, Jan. 01, 2022. https://www.louisbouchard.ai/top-10-cv-papers-2021/

[17]:       W. X. Zhao et al., ‘A Survey of Large Language Models.’ arXiv, Jun. 29, 2023. http://arxiv.org/abs/2303.18223.

[18]:       Voxel51, ‘ICCV 2023 Survival Guide: 10 Computer Vision Papers You Won’t Want to Miss | LinkedIn’, Sep. 08, 2023. https://www.linkedin.com/pulse/iccv-2023-survival-guide-10-computer-vision-papers-you-wont-want/

[19]:       Y. Hosni, ‘March 2023’s Breakthroughs in Computer Vision: A Selection of Top Papers’, Medium, Jun. 05, 2023. https://yousefhosni.medium.com/march-2023s-breakthroughs-in-computer-vision-a-selection-of-top-papers-fccb42c26d3a

[20]:       D. Bandopadhyay, ‘Top 10 Must-Read Papers on Computer Vision | LinkedIn’, Aug. 06, 2023. https://www.linkedin.com/pulse/top-10-must-read-papers-computer-vision-debiprasad-bandopadhyay/

[21]:       ArXiv, ‘Computer Vision and Pattern Recognition’, Dec. 15, 2023. https://arxiv.org/list/cs.CV/recent

[22]:       Springer, ‘International Journal of Computer Vision’, SpringerLink, 2023. https://link.springer.com/journal/11263

[23]:       A. I. Khan and S. Al-Habsi, ‘Machine Learning in Computer Vision’, Procedia Computer Science, vol. 167, pp. 1444–1451, Jan. 2020, https://www.doi.org/10/gs89j6

[24]:       F. Buyck et al., ‘Computer-vision based analysis of the neurosurgical scene – A systematic review’, Brain and Spine, vol. 3, p. 102706, Nov. 2023, https://www.doi.org/10/gs89j8

[25]:       N. Janbi, I. Katib, and R. Mehmood, ‘Distributed artificial intelligence: Taxonomy, review, framework, and reference architecture’, Intelligent Systems with Applications, vol. 18, p. 200231, May 2023, https://www.doi.org/10/gtgvb7

[26]:       Modelize.ai, ‘Pros and cons of using one AI agent vs. multi-agents’, AI Monks.io, Sep. 28, 2023. https://medium.com/aimonks/pros-and-cons-of-using-one-ai-agent-vs-multi-agents-b7cb3b369139

[27]:       S. J. Russell and P. Norvig, Artificial intelligence: a modern approach, Fourth edition. Hoboken: Pearson, 2021.

[28]:       University of California, ‘1549 Schools Worldwide That Have Adopted AIMA’, Nov. 06, 2023. https://aima.cs.berkeley.edu/adoptions.html

[29]:       L. Young, ‘Companies Brought in Robots. Now They Need ‘Robot Wranglers.’ - WSJ’, Jan. 30, 2024. https://www.wsj.com/business/robots-taking-jobs-still-need-humans-warehouses-63bc0306?mod=hp_minor_pos15

[30]:       J. Orr and A. Dutta, ‘Multi-Agent Deep Reinforcement Learning for Multi-Robot Applications: A Survey’, Sensors, vol. 23, no. 7, Art. no. 7, Jan. 2023, https://www.doi.org/10/gs83vb

[31]:       Future Farming, ‘10.000 ha in 6 months, a robot that has proved its worth! - Future Farming’, Feb. 01, 2024. https://shorturl.at/lqwFJ

[32]:       Wikipedia, ‘Reinforcement learning’, Wikipedia. Feb. 04, 2024. https://en.wikipedia.org/w/index.php?title=Reinforcement_learning&oldid=1203170541.

[33]:       Wikipedia, ‘Deep reinforcement learning’, Wikipedia. Jan. 28, 2024. https://en.wikipedia.org/w/index.php?title=Deep_reinforcement_learning&oldid=1199844804.

[34]:       I. H. Ahmed et al., ‘Deep Reinforcement Learning for Multi-Agent Interaction.’ arXiv, Aug. 02, 2022. https://www.doi.org/10.48550/arXiv.2208.01769

[35]:       A. Prorok, M. Malencia, L. Carlone, G. S. Sukhatme, B. M. Sadler, and V. Kumar, ‘Beyond Robustness: A Taxonomy of Approaches towards Resilient Multi-Robot Systems.’ arXiv, Sep. 25, 2021. http://arxiv.org/abs/2109.12343.

[36]:       P. Christiano, J. Leike, T. B. Brown, M. Martic, S. Legg, and D. Amodei, ‘Deep reinforcement learning from human preferences.’ arXiv, Feb. 17, 2023. http://arxiv.org/abs/1706.03741.

[37]:       IBM, ‘What is a chatbot? | IBM’, 2024. https://www.ibm.com/topics/chatbots

[38]:       Zendesk, ‘Chatbots vs. conversational AI: What’s the difference?’, Zendesk, Jan. 26, 2024. https://www.zendesk.com/blog/chatbot-vs-conversational-ai/

[39]:       ADA, ‘Chatbot vs. AI Agent: What’s the difference and why does it matter?’, Feb. 18, 2024. https://www.ada.cx/posts/chatbot-vs.-ai-agent-whats-the-difference-and-why-does-it-matter

[40]:       D. Shewan, ‘10 of the Most Innovative Chatbots on the Web’, WordStream, Jan. 25, 2024. https://www.wordstream.com/blog/ws/chatbots

[41]:       Replika, ‘Replika’, replika.com. https://replika.com

[42]:       K. Chayka, ‘Your A.I. Companion Will Support You No Matter What’, The New Yorker, Nov. 13, 2023. https://www.newyorker.com/culture/infinite-scroll/your-ai-companion-will-support-you-no-matter-what.

[43]:       K. Collins, ‘AI as Your BFF? A New Wave of Chatbots Want to Get Personal With You’, CNET, Oct. 09, 2023. https://www.cnet.com/tech/computing/ai-as-your-bff-a-new-wave-of-chatbots-want-to-get-personal-with-you/

[44]:       M. Adam, M. Wessel, and A. Benlian, ‘AI-based chatbots in customer service and their effects on user compliance’, Electron Markets, vol. 31, no. 2, pp. 427–445, Jun. 2021, https://www.doi.org/10/ghp3rf

[45]:       Michela, ‘Chatbots Vs. Live Agents: Will AI Replace Live Agents?’, 3C Contact Services, Apr. 14, 2023. https://3ccontactservices.com/will-chatbots-replace-live-agents-in-customer-support/

[46]:       E. A. Feigenbaum, ‘Knowledge Processing: From File Servers To Knowledge Servers’, 1988, https://stacks.stanford.edu/file/druid:fx225yf9651/fx225yf9651.pdf.

[47]:       B. Lutkevich, ‘What Is an Expert System? | Definition from TechTarget’, Enterprise AI, 2023. https://www.techtarget.com/searchenterpriseai/definition/expert-system

[48]:       Javatpoint, ‘Application of AI - Javatpoint’, www.javatpoint.com, 2023. https://www.javatpoint.com/application-of-ai

[49]:       Simplilearn, ‘What Are Expert Systems In AI? Complete Guide’, Simplilearn.com, Jan. 20, 2023. https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/what-are-expert-systems-in-ai

[50]:       V. Samarth, ‘Top 5 Real-World Applications of an Expert System in AI’, Emeritus India, Aug. 23, 2023. https://emeritus.org/in/learn/expert-system-in-ai/

[51]:       H. Tan, ‘A brief history and technical review of the expert system research’, 2017. https://www.researchgate.net/publication/320080019_A_brief_history_and_technical_review_of_the_expert_system_research

[52]:       Elsevier, ‘Most Downloaded Articles - Expert Systems with Applications’, 2023. https://www.journals.elsevier.com/expert-systems-with-applications/most-downloaded-articles

[53]:       Yashpatel, ‘What Are Expert Systems In AI?’, Medium, Nov. 09, 2023. https://medium.com/@laners.org/what-are-expert-systems-in-ai-c1e1b3356ec3

[54]:       D. Leonard-Barton and J. J. Sviokla, ‘Putting Expert Systems to Work’, Harvard Business Review, Mar. 01, 1988. https://hbr.org/1988/03/putting-expert-systems-to-work.

[55]:       Rajeshkumar, ‘Top 10 Expert Systems’, scmGalaxy, Jun. 06, 2023. https://www.scmgalaxy.com/tutorials/top-20-expert-systems/

[56]:       Great Learning, ‘Expert Systems in Artificial Intelligence (AI)’, Medium, Feb. 11, 2022. https://medium.com/@mygreatlearning/expert-systems-in-artificial-intelligence-ai-bdf4d7226ad3

[57]:       D. Balroop, ‘AI Expert systems’, Jan. 25, 2023. https://www.linkedin.com/pulse/ai-expert-systems-dr-dave-balroop/

[58]:       K. Wagner, ‘Lessons From the Catastrophic Failure of the Metaverse’, Jul. 03, 2023. https://www.thenation.com/article/culture/metaverse-zuckerberg-pr-hype/.

[59]:       A. Lee, ‘How EA plans to compete with Fortnite and Roblox in the metaverse’, Digiday, Feb. 02, 2023. https://digiday.com/marketing/how-ea-plans-to-compete-with-fortnite-and-roblox-in-the-metaverse/

[60]:       P. Confino, ‘Mark Zuckerberg’s $46.5 billion loss on the metaverse is so huge it would be a Fortune 100 company—but his net worth is up even more than that’, Yahoo Finance, Oct. 27, 2023. https://finance.yahoo.com/news/mark-zuckerberg-46-5-billion-174955420.html

[61]:       J. Vainilavičius, ‘Innovation dies where monopolies thrive: why Meta is failing at metaverse’, Cybernews, Dec. 30, 2022. https://cybernews.com/editorial/innovation-dies-where-monopolies-thrive-meta-metaverse/

[62]:       C. Lin, ‘If the metaverse is the future of social media, teens aren’t convinced’, Fast Company, Apr. 11, 2022. https://www.fastcompany.com/90740073/if-the-metaverse-is-the-future-of-social-media-teens-arent-convinced

[63]:       R. Davies, ‘Video game loot boxes linked to problem gambling, study shows’, The Guardian, Apr. 01, 2021. https://www.theguardian.com/society/2021/apr/02/video-game-loot-boxes-problem-gambling-betting-children.

[64]:       R. Davies, ‘‘It’s rotting young people’s brains’: the murky world of gambling in video games’, The Observer, Feb. 04, 2024. https://www.theguardian.com/society/2024/feb/04/its-rotting-young-peoples-brains-the-murky-world-of-gambling-in-video-games.

[65]:       A. So, ‘My Family Is Trapped in the Metaverse’, Wired, Jan. 26, 2022. https://www.wired.com/story/stuck-in-the-metaverse/.

[66]:       N. Stephenson, Snow crash, reissued ed. London: Penguin Books, 2011.

[67]:       M. Ball, The metaverse: and how it will revolutionize everything, First edition. New York, NY: Liveright Publishing Corporation, a division of W.W. Norton & Company, 2022.

[68]:       W. Gibson, Neuromancer, Ace trade ed. New York: Ace Books, 2000.

[69]:       W. Gibson, Count Zero. New York: Ace Books, 1987.

[70]:       W. Gibson, Mona Lisa overdrive. New York: Bantam Books, 1988.

[71]:       H. T. Kautz, Farnham’s Legend. EGOSOFT GmbH, 2005.

[72]:       Wikipedia, ‘Count Zero’, Wikipedia. Nov. 22, 2023. https://en.wikipedia.org/w/index.php?title=Count_Zero&oldid=1186305887.

[73]:       J. Baudrillard, Simulacra and simulation. Ann Arbor: University of Michigan Press, 1994.

[74]:       S. Gill, ‘Introducing Jean Baudrillard’s concept of hyperreality.’, desertofthereal, Jun. 01, 2020. https://medium.com/desertofthereal/introducing-jean-baudrillards-concept-of-hyperreality-3ec73df477eb

[75]:       M. Poster, Ed., ‘Baudrillard: Simulacra and Simulations’, 1988. https://web.stanford.edu/class/history34q/readings/Baudrillard/Baudrillard_Simulacra.html

[76]:       F. Rose, The art of immersion: how the digital generation is remaking Hollywood, Madison Avenue, and the way we tell stories, Pbk. ed. New York: Norton, 2012.

[77]:       T. Jourquin, ‘Hyper Reality at the core of the Metaverse’, LinkedIn, Aug. 16, 2022. https://www.linkedin.com/pulse/hyper-reality-core-metaverse-thierry-jourquin/

[78]:       ChessMax, ‘ChessMax/awesome-game-engines’, GitHub. Jan. 29, 2024. https://github.com/ChessMax/awesome-game-engines.

[79]:       Unity, ‘Real-Time 3D Development Platform & Editor’, Feb. 02, 2024. https://unity.com/products/unity-engine

[80]:       Bevy, ‘Bevy Engine’, 2024. https://bevyengine.org/

[81]:       B. K. Horton et al., ‘Game-Engine-Assisted Research platform for Scientific computing (GEARS) in Virtual Reality’, SoftwareX, vol. 9, pp. 112–116, Jan. 2019, https://www.doi.org/10/gf4kmm

[82]:       S. Gossett, ‘12 Parallel Processing Examples to Know’, Built In, Oct. 21, 2022. https://builtin.com/hardware/parallel-processing-example

[83]:       J. R., PhD, ‘Level Up: How Video Games Revolutionized Parallel Computing’, Medium, Aug. 17, 2023. https://itnext.io/level-up-how-video-games-revolutionized-parallel-computing-6473af870b92

[84]:       S. M. Schöbel and J. M. Leimeister, ‘Metaverse platform ecosystems’, Electron Markets, vol. 33, no. 1, p. 12, Apr. 2023, https://www.doi.org/10/gsnkwv

[85]:       Y. Zhao et al., ‘Metaverse: Perspectives from graphics, interactions and visualization’, Visual Informatics, vol. 6, no. 1, pp. 56–67, Mar. 2022, https://www.doi.org/10/gtgtx3

[86]:       E. Torma, ‘Understanding the VR/AR Landscape & Its Major Players’, Sharethrough, May 11, 2022. https://www.sharethrough.com/blog/understanding-the-vr-ar-landscape-its-major-players

[87]:       A. K. Kar and P. S. Varsha, ‘Unravelling the techno-functional building blocks of metaverse ecosystems – A review and research agenda’, International Journal of Information Management Data Insights, vol. 3, no. 2, p. 100176, Nov. 2023, https://www.doi.org/10/gskx6s

[88]:       Y. Cho and K. S. Park, ‘Designing Immersive Virtual Reality Simulation for Environmental Science Education’, Electronics, vol. 12, no. 2, Art. no. 2, Jan. 2023, https://www.doi.org/10/gtgtzd

[89]:       A. M. Al-Ansi, M. Jaboob, A. Garad, and A. Al-Ansi, ‘Analyzing augmented reality (AR) and virtual reality (VR) recent development in education’, Social Sciences & Humanities Open, vol. 8, no. 1, p. 100532, Jan. 2023, https://www.doi.org/10/mb99

[90]:       P. Cipresso, I. A. C. Giglioli, M. A. Raya, and G. Riva, ‘The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature’, Frontiers in Psychology, vol. 9, 2018, https://www.doi.org/10/ggdbhr

[91]:       L. A. Zadeh, ‘Fuzzy sets’, Information and Control, vol. 8, no. 3, pp. 338–353, Jun. 1965, https://www.doi.org/10/fcf6rc

[92]:       Wikipedia, ‘Fuzzy logic’, Wikipedia. Feb. 01, 2024. https://en.wikipedia.org/w/index.php?title=Fuzzy_logic&oldid=1201597950.

[93]:       Wikipedia, ‘Lotfi A. Zadeh’, Wikipedia. Jan. 02, 2024. https://en.wikipedia.org/w/index.php?title=Lotfi_A._Zadeh&oldid=1193237051.

[94]:       Simplilearn, ‘The Fascinating Role of Fuzzy Logic in Advancing AI’, Simplilearn.com, Aug. 21, 2023. https://www.simplilearn.com/fuzzy-logic-in-ai-article

[95]:       Emerging India Analytics, ‘Exploring The Intriguing Impact Of Fuzzy Logic On The Advancement Of Artificial Intelligence’, LinkedIn, Sep. 27, 2023. https://www.linkedin.com/pulse/exploring-intriguing-impact-fuzzy-logic/

[96]:       K. Guliyeva, ‘Lotfi A. Zadeh Phenomenon in the Development of the World Artificial Intelligence (Ai) System’, Jan. 2021, https://www.academia.edu/83235033/Lotfi_A_Zadeh_Phenomenon_in_the_Development_of_the_World_Artificial_Intelligence_Ai_System.

[97]:       M. Jafarov, ‘The Untold Story of Modern AI: How Lotfi Zadeh Forever Altered Our World’, LinkedIn, May 05, 2023. https://www.linkedin.com/pulse/untold-story-modern-ai-how-lotfi-zadeh-forever-altered-max-jafarov/

[98]:       IEEE, ‘Interview with Professor Lotfi A. Zadeh’, Nov. 27, 2018. https://ieeexplore.ieee.org/ielx5/10207/6052357/06052364.pdf?tp=&arnumber=6052364&isnumber=6052357

[99]:       T. S. Perry, ‘Lotfi Zadeh and the Birth of Fuzzy Logic - IEEE Spectrum’, Jun. 01, 1995. https://spectrum.ieee.org/lotfi-zadeh

[100]:     Wikipedia, ‘Deep learning’, Wikipedia. Oct. 03, 2023. https://en.wikipedia.org/w/index.php?title=Deep_learning&oldid=1178398312.

[101]:     A. Wong, T. Bäck, A. V. Kononova, and A. Plaat, ‘Deep multiagent reinforcement learning: challenges and directions’, Artif Intell Rev, vol. 56, no. 6, pp. 5023–5056, Jun. 2023, https://www.doi.org/10/gtgt22

[102]:     A. Dayal, L. R. Cenkeramaddi, and A. Jha, ‘Reward criteria impact on the performance of reinforcement learning agent for autonomous navigation’, Applied Soft Computing, vol. 126, p. 109241, Sep. 2022, https://www.doi.org/10/gtgt9d

[103]:     E. J. Michaud, A. Gleave, and S. Russell, ‘Understanding Learned Reward Functions.’ arXiv, Dec. 10, 2020. https://www.doi.org/10.48550/arXiv.2012.05862

[104]:     S. Casper et al., ‘Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback.’ arXiv, Sep. 11, 2023. https://www.doi.org/10.48550/arXiv.2307.15217

[105]:     A. Vaswani et al., ‘Attention Is All You Need.’ arXiv, Dec. 05, 2017. http://arxiv.org/abs/1706.03762.

[106]:     R. Merritt, ‘What Is a Transformer Model?’, NVIDIA Blog, Mar. 25, 2022. https://blogs.nvidia.com/blog/2022/03/25/what-is-a-transformer-model/

[107]:     Apple Machine Learning Research, ‘Deploying Transformers on the Apple Neural Engine’, Aug. 11, 2023. https://machinelearning.apple.com/research/neural-engine-transformers

[108]:     Y. Gavrilova, ‘Transformers in ML: What They Are and How They Work’, Serokell Software Development Company, 2023. https://serokell.io/blog/transformers-in-ml

[109]:     S. M. Kerner, ‘What is a large language model (LLM)?’, TechTarget WhatIs.com. https://www.techtarget.com/whatis/definition/large-language-model-LLM

[110]:     ChainlessCoder, ‘The importance of invariance in AI 🤖’, Medium, Jan. 25, 2022. https://medium.com/@ChainlessCoder/the-importance-of-invariance-in-ai-563b8b2d60bd

[111]:     H. Kvinge, T. H. Emerson, G. Jorgenson, S. Vasquez, T. Doster, and J. D. Lew, ‘In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?’, 2022, https://proceedings.neurips.cc/paper_files/paper/2022/file/d36dfcdb14473a8526111c221660f2ab-Paper-Conference.pdf.

[112]:     G. Benton, M. Finzi, P. Izmailov, and A. G. Wilson, ‘Learning Invariances in Neural Networks from Training Data’, in Advances in Neural Information Processing Systems, 2020, vol. 33, pp. 17605–17616. https://proceedings.neurips.cc/paper/2020/hash/cc8090c4d2791cdd9cd2cb3c24296190-Abstract.html.

[113]:     Kristian, ‘GPT-4 Prompt Engineering: Why Larger Context Window is a Game-Changer’, Mar. 27, 2023. https://www.allabtai.com/gpt-4-prompt-engineering-why-larger-context-window-is-a-game-changer/

[114]:     T. Keary, ‘What is the Role of Parameters in AI?’, Techopedia, Nov. 28, 2023. https://www.techopedia.com/experts/what-is-the-role-of-parameters-in-ai

[115]:     S. Minaee et al., ‘Large Language Models: A Survey.’ arXiv, Feb. 20, 2024. https://www.doi.org/10.48550/arXiv.2402.06196

[116]:     A. M. Dai and N. Du, ‘More Efficient In-Context Learning with GLaM’, Dec. 09, 2021. https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html

[117]:     Nimdzi Insights, ‘The Language AI Alphabet: transformers, LLMs, generative AI and ChatGPT’, MultiLingual, Apr. 20, 2023. https://multilingual.com/the-language-ai-alphabet-transformers-llms-generative-ai-and-chatgpt/

[118]:     A. Masood, ‘Exploring Generative AI Models: VAEs, GANs, Diffusion, Transformers, and NeRFs’, LinkedIn, 2023. https://www.linkedin.com/pulse/exploring-generative-ai-models-vaes-gans-diffusion-adnan-masood-phd-/

[119]:     J. Rose, ‘How to Run Llama 2 Locally on Mac, Windows, iPhone and Android’, cheatsheet.md, Dec. 17, 2023. https://cheatsheet.md/llm-leaderboard/how-to-install-local-llama

[120]:     A. Truly, ‘LLaMA 2 guide: Meta AI’s open source large language model explained’, Android Police, Jan. 24, 2024. https://www.androidpolice.com/llama-2-guide/

[121]:     J. Laird, ‘Top Android Phones Set to Pack Serious AI Power in 2024’, Tech.co, Aug. 24, 2023. https://tech.co/news/android-phones-ai-power

[122]:     A. I. Shaik, ‘3nm Dimensity 9400 to launch in late 2024 to compete with Exynos 2500’, SamMobile, Feb. 01, 2024. https://www.sammobile.com/news/3nm-dimensity-9400-launch-q4-2024-compete-exynos-2500/

[123]:     D. Lumb, ‘Qualcomm’s “Holy Grail”: Generative AI Is Coming to Phones Soon’, CNET. https://www.cnet.com/tech/mobile/generative-ai-is-coming-to-phones-next-year-thanks-to-qualcomm-chips/

[124]:     OpenAI, ‘Sora: Creating video from text’, Feb. 17, 2024. https://openai.com/sora

[125]:     J. Kirkpatrick et al., ‘Overcoming catastrophic forgetting in neural networks’, Proc. Natl. Acad. Sci. U.S.A., vol. 114, no. 13, pp. 3521–3526, Mar. 2017, https://www.doi.org/10.1073/pnas.1611835114

[126]:     G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, ‘Continual lifelong learning with neural networks: A review’, Neural Networks, vol. 113, pp. 54–71, May 2019, https://www.doi.org/10/gfvx7d

[127]:     N. Y. Masse, G. D. Grant, and D. J. Freedman, ‘Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization’, Proc. Natl. Acad. Sci. U.S.A., vol. 115, no. 44, Oct. 2018, https://www.doi.org/10/gfngpg

[128]:     R. Golden, J. E. Delanois, P. Sanda, and M. Bazhenov, ‘Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation’, PLOS Computational Biology, vol. 18, no. 11, p. e1010628, Nov. 2022, https://www.doi.org/10/gq86tr

[129]:     Wikipedia, ‘Emergence’, Wikipedia. Jul. 09, 2023. https://en.wikipedia.org/w/index.php?title=Emergence&oldid=1164405926.

[130]:     J. Wei et al., ‘Emergent Abilities of Large Language Models’, 2022, https://openreview.net/pdf?id=yzkSU5zdwD.

[131]:     K. Miller, ‘AI’s Ostensible Emergent Abilities Are a Mirage’, Stanford HAI, Aug. 08, 2023. https://hai.stanford.edu/news/ais-ostensible-emergent-abilities-are-mirage

[132]:     R. Schaeffer, B. Miranda, and S. Koyejo, ‘Are Emergent Abilities of Large Language Models a Mirage?’ arXiv, May 22, 2023. http://arxiv.org/abs/2304.15004.

[133]:     D. Ramel, ‘“Emergent Abilities”: When AI LLMs Learn Stuff They Shouldn’t Know’, Virtualization Review, 2023. https://virtualizationreview.com/articles/2023/04/21/llm-emergence.aspx

[134]:     S. Bubeck et al., ‘Sparks of Artificial General Intelligence: Early experiments with GPT-4.’ arXiv, Mar. 27, 2023. https://www.doi.org/10.48550/arXiv.2303.12712

[135]:     K. S. Ravishankar, ‘Understanding Artificial General Intelligence: The Next Frontier in AI’, LinkedIn, 2023. https://www.linkedin.com/pulse/understanding-artificial-general-intelligence-next-ai-karthik/

[136]:     S. Ornes, ‘The Unpredictable Abilities Emerging From Large AI Models’, Quanta Magazine, Mar. 16, 2023. https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/

[137]:     B. Thormundsson, ‘Power use in training LLMs 2022’, Statista. https://www.statista.com/statistics/1384401/energy-use-when-training-llm-models/

[138]:     P. Walsh, ‘Here Comes the Sun! Why Large Language Models Don’t have to Cost the Earth’, LinkedIn, 2023. https://www.linkedin.com/pulse/here-comes-sun-why-large-language-models-dont-have-cost-paul-walsh/

[139]:     P. Villalobos, J. Sevilla, L. Heim, T. Besiroglu, M. Hobbhahn, and A. Ho, ‘Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning.’ arXiv, Oct. 25, 2022. http://arxiv.org/abs/2211.04325.

[140]:     T. Xu, ‘We could run out of data to train AI language programs’, MIT Technology Review, 2022. https://www.technologyreview.com/2022/11/24/1063684/we-could-run-out-of-data-to-train-ai-language-programs/

[141]:     S. Smith, ‘Will running out of Machine Learning data become a bottleneck for scaling large language AI Models?’, LinkedIn, 2023. https://www.linkedin.com/pulse/our-running-out-machine-learning-data-become-bottleneck-sharon-smith/

[142]:     J. Taylor, ‘Google says AI systems should be able to mine publishers’ work unless companies opt out’, The Guardian, Aug. 08, 2023. https://www.theguardian.com/technology/2023/aug/09/google-says-ai-systems-should-be-able-to-mine-publishers-work-unless-companies-opt-out.

[143]:     D. Hassabis, D. Kumaran, C. Summerfield, and M. Botvinick, ‘Neuroscience-Inspired Artificial Intelligence’, Neuron, vol. 95, no. 2, pp. 245–258, Jul. 2017, https://www.doi.org/10/gbp987

[144]:     C. V. Nicholson, ‘A Beginner’s Guide to Neural Networks and Deep Learning’, Pathmind, Apr. 12, 2023. http://wiki.pathmind.com/neural-network

[145]:     Cloudflare, ‘What is a neural network?’, 2024. https://www.cloudflare.com/en-gb/learning/ai/what-is-neural-network/

[146]:     J. Chen, ‘What Is a Neural Network?’, Investopedia, Feb. 07, 2024. https://www.investopedia.com/terms/n/neuralnetwork.asp

[147]:     Wikipedia, ‘Natural language processing’, Wikipedia. Feb. 19, 2024. https://en.wikipedia.org/w/index.php?title=Natural_language_processing&oldid=1208874424.

[148]:     Wikipedia, ‘Alan Turing’, Wikipedia. Feb. 23, 2024. https://en.wikipedia.org/w/index.php?title=Alan_Turing&oldid=1209717076.

[149]:     A. M. Turing, ‘Computing Machinery and Intelligence’, Mind, New Series, vol. 59, no. 236, pp. 433–460, 1950, https://www.doi.org/10/b262dj

[150]:     DeepLearning.ai, ‘Natural Language Processing (NLP) - A Complete Guide’, Jan. 11, 2023. https://www.deeplearning.ai/resources/natural-language-processing/

[151]:     R. Chaturvedi, S. Verma, R. Das, and Y. K. Dwivedi, ‘Social companionship with artificial intelligence: Recent trends and future avenues’, Technological Forecasting and Social Change, vol. 193, p. 122634, Aug. 2023, https://www.doi.org/10/gsfp83

[152]:     D. Vahaba, ‘Could AI-powered Robot “Companions” Combat Human Loneliness?’, Duke Today, Jul. 12, 2023. https://today.duke.edu/2023/07/could-ai-powered-robot-companions-combat-human-loneliness

[153]:     K. Pal, ‘How Do AI Companions Help Combat Loneliness?’, Dec. 20, 2023. https://www.techopedia.com/breaking-the-silence-how-do-ai-companions-help-combat-loneliness

[154]:     E. Strickland, ‘The Brain Implant That Sidesteps The Competition’, IEEE Spectrum, Jan. 24, 2024. https://spectrum.ieee.org/brain-implant-close-to-market

[155]:     E. Ackerman, ‘Year of the Humanoid: Legged robots from eight companies vie for jobs’, IEEE Spectrum, Jan. 24, 2024. https://ieeexplore.ieee.org/document/10384544

[156]:     E. Ackerman, ‘A Robot for Humanity: How Robots Can Empower People Who Need them the Most’, IEEE Spectrum, vol. 60, no. 11, pp. 22–29, Nov. 2023, https://www.doi.org/10/gtgbqk

[157]:     ‘This is Ameca, the most advanced life-like robot in the world!’ [Video], 2023. https://www.youtube.com/shorts/Q1d5aiBFNpM.

[158]:     This Morning, ‘Meet Ameca! The World’s Most Advanced Robot’, YouTube, 2023. https://www.youtube.com/watch?v=vE9tIYGyRE8

[159]:     The AIGrid, ‘Google’s New Robot Just SHOCKED The Entire Industry’, YouTube, 2024. https://www.youtube.com/watch?v=Ckhf6WfXRI8

[160]:     V. Corless, ‘People prefer voice companions to human-like robots’, Advanced Science News, Apr. 20, 2022. https://www.advancedsciencenews.com/people-prefer-voice-companions-to-human-like-robots/

[161]:     B. Abendschein, A. Edwards, and C. Edwards, ‘Novelty Experience in Prolonged Interaction: A Qualitative Study of Socially-Isolated College Students’ In-Home Use of a Robot Companion Animal’, Frontiers in Robotics and AI, vol. 9, 2022, https://www.doi.org/10/gtjm4d

[162]:     H. L. Bradwell, K. J. Edwards, R. Winnington, S. Thill, and R. B. Jones, ‘Companion robots for older people: importance of user-centred design demonstrated through observations and focus groups comparing preferences of older people and roboticists in South West England’, BMJ Open, vol. 9, no. 9, p. e032468, Sep. 2019, https://www.doi.org/10/gtjm4c

[163]:     K. Washington, ‘Robots With a Human Touch? Yes, Please’, Wired UK, Jan. 16, 2023. https://www.wired.co.uk/article/robots-bodies-design

[164]:     G. Roy, ‘Revolutionizing Agriculture: The Role of Robotics in Boosting Productivity and Sustainability’, Nov. 18, 2023. https://www.securities.io/revolutionizing-agriculture-the-role-of-robotics-in-boosting-productivity-and-sustainability/

[165]:     M. Wakchaure, B. K. Patle, and A. K. Mahindrakar, ‘Application of AI techniques and robotics in agriculture: A review’, Artificial Intelligence in the Life Sciences, vol. 3, p. 100057, Dec. 2023, https://www.doi.org/10/gtjsff

[166]:     A. Mushtaque, ‘Top 20 AI and Robotics Companies Transforming the Agriculture Sector’, Yahoo Finance, Sep. 11, 2023. https://finance.yahoo.com/news/top-20-ai-robotics-companies-100837458.html

[167]:     O. Ali, W. Abdelbaki, A. Shrestha, E. Elbasi, M. A. A. Alryalat, and Y. K. Dwivedi, ‘A systematic literature review of artificial intelligence in the healthcare sector: Benefits, challenges, methodologies, and functionalities’, Journal of Innovation & Knowledge, vol. 8, no. 1, p. 100333, Jan. 2023, https://www.doi.org/10/gtfpqf

[168]:     O. V. Bitkina, J. Park, and H. K. Kim, ‘Application of artificial intelligence in medical technologies: A systematic review of main trends’, DIGITAL HEALTH, vol. 9, p. 20552076231189331, Jan. 2023, https://www.doi.org/10/gtjsft

[169]:     A. Amin et al., ‘Future of Artificial Intelligence in Surgery: A Narrative Review’, Cureus, vol. 16, no. 1, p. e51631, Jan. 2024, https://www.doi.org/10/gtjsfv

[170]:     New England Journal of Medicine, ‘AI in Medicine’, 2024. https://www.nejm.org/ai-in-medicine

[171]:     J. McCartney, ‘AI Is Poised to ‘Revolutionize’ Surgery’, ACS, Jun. 07, 2023. https://www.facs.org/for-medical-professionals/news-publications/news-and-articles/bulletin/2023/june-2023-volume-108-issue-6/ai-is-poised-to-revolutionize-surgery/

[172]:     B. Meskó, ‘What Do We Need To Have AI-Equipped Nanobots In Medicine’, LinkedIn, Dec. 08, 2023. https://www.linkedin.com/pulse/what-do-we-need-have-ai-equipped-nanobots-medicine-mesk%C3%B3-md-phd-gomie/

[173]:     N. Sahota, ‘AI Nanobots in Healthcare: From Sci-Fi Concept to Clinical Promise’, Neil Sahota, Dec. 19, 2023. https://www.neilsahota.com/ai-nanobots-in-healthcare-from-sci-fi-concept-to-clinical-promise/

[174]:     A. Banafa, ‘AI and Drones 🤖’, OpenMind, Dec. 11, 2023. https://www.bbvaopenmind.com/en/technology/artificial-intelligence/ai-and-drones/

[175]:     M. Makaryan, ‘AI Is Transforming Military Drones’, Inkstick, Dec. 21, 2023. https://inkstickmedia.com/ai-is-transforming-military-drones/

[176]:     C. Walter, ‘Drone AI Technology: How It Works; Why It Matters’, Consortiq, Aug. 14, 2022. https://consortiq.com/uas-resources/drone-ai-technology-how-it-works-why-it-matters

[177]:     S. Halpern, ‘A.I. and the Next Generation of Drone Warfare’, The New Yorker, Sep. 15, 2023. https://www.newyorker.com/news/news-desk/ai-and-the-next-generation-of-drone-warfare

[178]:     P. P. O’Brien, ‘The Real AI Weapons Are Drones, Not Nukes’, The Atlantic, Feb. 01, 2024. https://www.theatlantic.com/ideas/archive/2024/02/artificial-intelligence-war-autonomous-weapons/677306/

[179]:     A. Banafa, ‘Nuclear AI: Pioneering the Future of Nuclear Technology’, LinkedIn, Aug. 09, 2023. https://www.linkedin.com/pulse/nuclear-ai-pioneering-future-technology-prof-ahmed-banafa/

[180]:     IAEA, ‘Seven Ways AI Will Change Nuclear Science and Technology’, Sep. 22, 2022. https://www.iaea.org/newscenter/news/seven-ways-ai-will-change-nuclear-science-and-technology

[181]:     Nuclear Engineering International, ‘Using AI in nuclear decommissioning’, Feb. 09, 2023. https://www.neimagazine.com/features/featureusing-ai-in-nuclear-decommissioning-10580423/

[182]:     WNN, ‘Sellafield AI strategy “to boost safety and speed up site remediation”’, World Nuclear News, Mar. 09, 2023. https://world-nuclear-news.org/Articles/Sellafield-unveils-AI-strategy-to-accelerate-clean

[183]:     OpenAI, ‘GPT-4 System Card’. Mar. 27, 2023. https://cdn.openai.com/papers/gpt-4-system-card.pdf.

[184]:     K. Crawford, Atlas of AI: power, politics, and the planetary costs of artificial intelligence. New Haven London: Yale University Press, 2021.

[185]:     M. O’Gieblyn, God, human, animal, machine: technology, metaphor, and the search for meaning, First edition. New York: Doubleday, 2021.