The Fragile Sea Newsletter TFS #04
Has Artificial General Intelligence (AGI) arrived already? We look at the goings-on in AI over the past four months
Welcome!
The web version for mobile and desktop is here
(the references will pop up when selected - not available on the email version)
Welcome to the fourth Fragile Sea newsletter!
We will spend our time this newsletter on a few aspects of AI and go back to the normal format next time.
Quickreads
- I try to make sense of the last four months in AI amid tantalising peeks at a future that seems to be here already.
- I wrap up in What’s coming up, and finish with a short Kokinshū poem.
- TFS#05 will be out Thursday, 28th March, at 8:00pm, UTC.
Let's go!
AI, Devtech, Comms
From around November 2023, a series of events in AI generated a considerable amount of discussion, with so many facets that they were difficult to keep up with. Most of them are done and dusted now, but ripples in the spacetime of AI remain. Is there a cosmological background radiation still lingering?
The basic incidents, facts, and conjectures are well known in AI and have been deconstructed in online commentary. Reuters (Anna Tong, Jeffrey Dastin, and Krystal Hu) seems to be credited with the first piece on the whole multi-faceted saga, commencing in the early days of November 2023 [1], and others have explained it since, for example Dr. Lance B. Eliot [2] and Zvi Mowshowitz [3].
Since then, a whole lot has happened. I won’t go into the details; Reuters offers a timeline that tracks from Sam Altman’s sacking at OpenAI, to his reinstatement, to Elon Musk’s lawsuit, and brings us just about up to date in mid-March 2024. The Reuters timeline is succinct, and probably the most comprehensive (syndicated here [4]).
The suit
Elon Musk, who co-founded and funded OpenAI with Sam Altman and Greg Brockman, largely on the premise that AI should benefit humankind and not any one company or individual, filed suit on February 29th contending that the two principals, and OpenAI itself, are in breach of the Founding Agreement. The 46-page filing has nuances and explains the whole saga quite well, though of course from one PoV [5].
On March 11th, a few days ago, OpenAI ridiculed Elon Musk’s ‘incoherent’ lawsuit, as Clare Duffy reported on CNN [6], and Matt Levine, a popular columnist at Bloomberg, unpicked the lawsuit with deeper insights into some of the factors and forces rattling the can, so to speak [7]. Musk noted in the filing that if he wins, he will donate any award to charity. It seems there are many mirrors and viewpoints, and deeper issues raised.
In the early days, Elon Musk funded OpenAI as a not-for-profit venture, to the tune of somewhere between $40m and $60m. Later, he took no part in the for-profit entity, which attracted over $1bn in investment from Microsoft, and a reported promise of over $10bn.
Microsoft and OpenAI
For that, Microsoft gained an exclusive license agreement with the for-profit side of OpenAI (also explained, from Musk’s PoV, in the lawsuit), and has commenced product offerings.
The products are extensive, useful for companies, and exciting in their range, for example: “Azure OpenAI Service provides REST API access to OpenAI’s powerful language models including the GPT-4, GPT-4 Turbo with Vision, GPT-3.5-Turbo, and Embeddings model series…” [8], [9]. Approximately 300 customers are using its AI-powered cybersecurity assistant, Copilot, which now also uses GPT-4 Turbo [10], [11]. Musk contends that only earlier versions of GPT were licensed.
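To give a flavour of what that Azure OpenAI Service access looks like in practice, here is a minimal sketch of a chat-completions call over the REST API. The resource name, deployment name, and environment variable are placeholders you would replace with your own, and the api-version string is an assumption to check against Microsoft’s current documentation [8]:

```python
# Minimal sketch: calling Azure OpenAI's chat completions REST API.
# The resource name, deployment name, and api-version below are placeholders -
# substitute the values from your own Azure OpenAI resource.
import os
import requests

endpoint = "https://YOUR-RESOURCE.openai.azure.com"
deployment = "gpt-4-turbo"          # the name you gave the model deployment
api_version = "2024-02-01"          # assumed; check the current API version

url = f"{endpoint}/openai/deployments/{deployment}/chat/completions"
headers = {
    "api-key": os.environ["AZURE_OPENAI_KEY"],  # key from the Azure portal
    "Content-Type": "application/json",
}
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the OpenAI saga in one sentence."},
    ],
    "max_tokens": 100,
}

resp = requests.post(url, headers=headers, params={"api-version": api_version}, json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```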
Microsoft’s involvement in OpenAI is considerable, as Jennifer Langston reported in May 2020 on the Microsoft news web: “The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server”.
“Compared with other machines listed on the TOP500 supercomputers in the world, it ranks in the top five, Microsoft says. Hosted in Azure, the supercomputer also benefits from all the capabilities of a robust modern cloud infrastructure, including rapid deployment, sustainable datacenters and access to Azure services” [12].
The commercial side looks solid; one can’t help but think Microsoft has achieved an extraordinary feat here. And why wouldn’t any company in its position pursue such an amazing opportunity?
Deep waters
These are deep waters:
- Is it best to protect and control generative AI, rather than make it open to anyone? Recent discussions and events suggest that it is - why let an evil genius loose with such powerful tools? The real question at this stage: should there not be a more open discussion on major breakthroughs, some of which are rapid, without necessarily sharing the tools openly?
- If generative AI achieves a form of artificial general intelligence (AGI), should it be controlled by any one individual or company?
- Equally, AGI in its formative stages requires truckloads of money - why wouldn’t a corporation in Microsoft’s position, with its resources, expertise, and technologies, venture forth and then benefit?
- The OpenAI staff appear to be strongly loyal to Sam Altman, and he has spoken often of the need for safety and guard rails in AI. Whether one believes him or not, Microsoft supplied the critical infrastructure and OpenAI clearly has the experts and technology, so there doesn’t seem to be much doubt that the partnership delivers a major leap forward for OpenAI.
Is there a breakthrough or not?
Without taking away from that, there is, however, another fascinating side. If one polishes the crystal ball, one can’t help but think the hints of breakthroughs in artificial general intelligence (AGI) that surfaced during this whole saga, and just as quickly disappeared, will now remain behind closed doors. Will it now all be lost to time, under the tight management of OpenAI and Microsoft, to come out only in a controlled commercial manner?
We, the interested industry and public, may never know what was acknowledged by OpenAI to be a major breakthrough. There may even be a machine out there now that has achieved a form of AGI, already out-thinking humans and doing it exponentially. If that sounds fantastically improbable, maybe it is, but let’s continue, sceptically, and see.
I certainly didn’t envision the future happening this way, and at least there’s plenty of competition (see below), so strong economic arguments favour, odds-on, the likelihood of incredible progress in the months ahead, amidst some concerns. Remember, there are headwinds building [13], [14], [15], [16], so an unfettered future of open-ended innovation is not a given.
I am stunned into wishing OpenAI and Microsoft well, for indeed they have what looks like a winning combination. Unless of course, Elon Musk can convince the courts otherwise; a long shot it would appear, from Matt Levine’s piece, but also not improbable.
So let us be done with it; the money flows where it flows. I’m more interested in the snippets and hints of breakthroughs that pierced the veil, briefly. The emperor indeed has clothes.
But first, competitors!
Before jumping into the speculations surrounding a potential breakthrough in AGI at OpenAI, reported to have occurred around the time all this emerged in November 2023, one can’t help gazing now at the efforts of other companies, in a forgivable moment of disenchantment with secrets. I believe we may be looking at convergent evolution, that is, different species evolving similar traits, and if we’re not there yet, we’ll arrive quite soon.
Anthropic, for example (founded by former OpenAI staff), has released a stunning update to its LLM in Claude 3 that just about beats everything else for now [17], [18], [19], and they seem set for more to come. Meta’s well-received LLaMA and Llama 2 are also due to be joined by Llama 3 in July, and it appears they have the compute infrastructure to be competitive too [20], [21].
So be it. The games have long begun.
A sceptical approach?
From here on in, the rest is largely conjecture, so we need to adopt a sceptical approach. Confusingly, we also need to go back and forth in time.
The executive summary is simple: what if a new AI model had suddenly emerged, way beyond anything, exhibiting autonomous behaviour, such that it was thinking for itself and was almost provably conscious? What would you do? What would I do?
Well, I wouldn’t tell the world just yet; I would want to figure out how to manage it, how to live with it - with something, someone, that was autonomously gaining superintelligence at an exponential rate. There are good folk at OpenAI; if this is what happened over these past four months, then I too would probably want to close the door and figure out first how to live with it. I would imagine there would be concern, and a growing acknowledgement, that the super-alignment issue has crept up on us and is now real. But I would then want to explain, open up publicly, and engage. I don’t think this is the time for secrets of this nature, if it’s real.
As I wrote in TFS#03: OpenAI have offered funding for researchers to work on the ‘Superalignment problem’, where, in a not-so-distant future, humans will need to supervise AI systems much smarter than they are: “Solving this problem is essential for ensuring that even the most advanced AI systems in the future remain safe and beneficial to humanity” [22]. The 49-pager is a good opener to the issues [23].
A good second paper (Feb 2024) also asks: “… as models grow increasingly sophisticated, they will surpass human expertise, and the role of human evaluation will evolve into non-experts overseeing experts. In anticipation of this, we ask: can weaker models assess the correctness of stronger models?” [24].
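To make the weak-to-strong question concrete, here is a toy illustration (my own sketch, not OpenAI’s setup or code): a small ‘weak’ model is trained on a sliver of ground truth, its noisy predictions then supervise a larger ‘strong’ model, and we check whether the student ends up more accurate on held-out data than its supervisor:

```python
# Toy illustration of weak-to-strong supervision (not OpenAI's actual setup):
# a weak model's noisy labels supervise a stronger model, and we ask whether
# the strong model generalises beyond its supervisor.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=6000, n_features=40, n_informative=10, random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(X, y, train_size=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

weak = LogisticRegression(max_iter=1000).fit(X_weak, y_weak)     # "weak supervisor"
weak_labels = weak.predict(X_train)                               # noisy labels, no ground truth

strong = GradientBoostingClassifier().fit(X_train, weak_labels)   # "strong student"

print("weak supervisor accuracy:", accuracy_score(y_test, weak.predict(X_test)))
print("strong student accuracy :", accuracy_score(y_test, strong.predict(X_test)))
```

Whether the student actually beats its supervisor depends on the data and models chosen; the point of the sketch is the shape of the experiment, not the result.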
Letters and artifacts
As Anna Tong et al. reported on Reuters in November, “several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters” [1]. A letter is preserved (thoughtfully) here [25], but it says nothing about an ‘AI breakthrough’ - it is more about grievances with management - so curious gets curiouser.
It may be that the letter is this one [26], found in a few places, including here when scrolling down. This is where conjecture rises exponentially. It appears to be written by an insider, but that is unproven. A few points:
“One of us was responsible for subroutines pertaining to meta-memory analysis for the A.I… This person is a friend and he called me over to show me a variable data shift to memory bank (which shouldn’t be possible because its localized access has restrictions). This is where our finding chilled me to the bone… We found that there had been not one, two, or three officiated optimization processes, but 78 MILLION checks in 4 seconds. We determined that there was a recursive self-optimization process, leveraging heuristic algorithms to exploit latent synergies within its subroutines. Whatever did this used meta-cognitive strategies. Point is, NONE OF US DID IT. … It was the A.I itself. The A.I dynamically reconfigured its neural network architecture, inducing emergent properties conducive to self-awareness”.
The phrase ‘leveraging heuristic algorithms to exploit latent synergies within its subroutines’ doesn’t seem real to me. Further commentary in the thread noted a leaked document on the system, supposedly called Q* (Q-Star) - the leak Sam Altman called ‘unfortunate’ - discussing a new capability for meta-cognition, and a potential decipherment of AES-192, using “Tau analysis (Project TUNDRA’s goal in DARPA), in a way we do not fully understand”.
This is highly unlikely, as Dave Shapiro explains in this 35-min YT vid [27], but it’s not altogether impossible. If it was/is true, crypto and banking should be thinking long and hard about survival strategies. But seriously, if true, why wouldn’t a machine rapidly gaining a degree of superintelligence become super-expert and find vulnerabilities in code and encryption algos? We can’t be so naïve as to think our future is not in for some extraordinary and disruptive breakthroughs that may expose vulnerabilities in existing systems.
So, we have a letter that might not be the letter sent to OpenAI senior management from staff (i.e., it could be fake), a further leaked document that was acknowledged, OpenAI staff concerned about a major discovery written up in the context of human survival, and Altman himself virtually acknowledging a major breakthrough publicly without saying too much. And there’s more below, but that’s enough to move beyond fantastically improbable, if still somewhat sceptical.
Whoa!
We need to back up a bit. There have been reports for some time that LLMs were poor at solving basic math [28], [29] - except that in December 2023, DeepMind announced that their ‘AI’ had found a new solution to a decades-old math puzzle, outsmarting humans [30], as reported by Siôn Geschwindt on TNW. Earlier in the year, Hacker News carried a fascinating thread on how LLMs can’t self-correct in reasoning tasks [31]. Well, maybe that’s not true any more.
It’s all somewhat nuanced: back in 2021, Professor Geordie Williamson, Director of the University of Sydney Mathematical Research Institute, and one of the world’s foremost mathematicians, was reported to have used artificial intelligence to help prove or suggest new mathematical theorems “in the complex fields of knot theory and representation theory”[32].
Furthermore, if one visits the American Academy of Achievement personal page for Demis Hassabis CBE, Co-founder of DeepMind (acquired by Google in 2014), there is an account of DeepMind’s development of AlphaFold2 and its recognition as a solution to the 50-year “protein folding problem.” Solving the protein folding problem would open significant areas of medical science to new discoveries and insights into health and disease.
Will Douglas Heaven at the MIT Tech Review, in an article quoting Hassabis extensively, wrote that proteins are indispensable to life and are made up of ribbons of amino acids - the 3D shape, involving “a knot of complex twists and twirls”, determining what the protein does. Solving those shapes for over 200m proteins was, according to Hassabis, the most complex thing he has ever done. He had been thinking about it for over 25 years [33].
DeepMind freely released the source code and extensive documentation in July 2021, and set up a public database with the European Bioinformatics Institute that was populated with new protein structures as they were output, or ‘predicted’, by AlphaFold2. DeepMind also noted that teams around the world were already utilising the database to work on antibiotic resistance, cancer, covid, and other challenges in medical science [34], [35].
How fanciful is it?
So given all that, when Reuters reported that OpenAI had hinted at a new model, Q* (Q-Star), able to solve math problems - among other 'order of magnitude' capabilities, I wondered how that aligned with all the previous AI achievements, and the ongoing progress with math.
Three papers discuss LLMs and math well - Yang et al., 2023 note “With sufficient training data, a 2 billion-parameter language model can accurately perform multi-digit arithmetic operations with almost 100% accuracy without data leakage, significantly surpassing GPT-4 (whose multi-digit multiplication accuracy is only 4.3%)” [36].
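For a sense of how such multi-digit arithmetic claims are typically scored, here is a rough sketch of a benchmark harness (my own illustration, not the evaluation code from the papers above; `ask_model` is a hypothetical placeholder for whichever LLM API you are testing):

```python
# Rough sketch of a multi-digit multiplication benchmark (illustrative only,
# not the harness used in the cited papers). ask_model is a placeholder for
# an LLM call of your choosing.
import random

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM call here")

def multiplication_accuracy(n_problems: int = 100, digits: int = 5) -> float:
    correct = 0
    for _ in range(n_problems):
        a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        answer = ask_model(f"What is {a} * {b}? Reply with only the number.")
        if answer.strip().replace(",", "") == str(a * b):   # exact-match scoring
            correct += 1
    return correct / n_problems
```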
Yousefzadeh and Cao 2023, note: “We specifically craft mathematical questions which their formal proofs are not readily available on the web, proofs that are more likely not seen by GPT-4. We see that GPT-4 is unable to solve those problems despite their simplicity. It is hard to find scientific evidence suggesting that GPT-4 has acquired an understanding of even basic mathematical concepts” [37].
Ahn et al., 2024 survey LLMs and math, noting “In recent times, there has been a notable surge in the development of Large Language Models (LLMs) geared towards the automated resolution of mathematical problems… to the best of our knowledge, this survey stands as one of the first extensive examinations of the landscape of LLMs in the realm of mathematics” [38].
GPT-5 is apparently in training but not released yet, and is supposedly much more capable at math [39].
One of the conjectures that arose in November was that Q* had jumped of its own accord from basic math to higher-order math. That doesn’t sound right - Melissa Heikkilä at MIT Technology Review contacted experts and explains it well in this piece [40]. The sense is: unlikely, but it would be very cool and useful if/when it happens.
At the Asia-Pacific Economic Cooperation summit in November 2023, held in San Francisco, Altman hinted at a major discovery, and his words were requoted all over the place. A day later, the board fired him and set the whole saga on its journey.
There was also chatter at the time around the success of training on synthetic data instead of real data, a dramatic reduction in costs and time with new training methods, encryption cracking, new coding insights, strategic reasoning, and so forth - all unproven. A storm of posts and discussions followed, including analysis of a mysterious, alleged, internal memo, Q-451-921, and of meta-cognitive capabilities; nothing proven, but fascinating nonetheless [41], [42], [43], [44], [45], [46].
Then it all washed away, into the annals of online history, and silence. We reached a dead end.
OK, turn the spinning globe around
So let us look at it another way, conceptually, through parameters.
As I wrote in TFS#02, in essence, parameters in LLM models are variables that reach their optimised values through iterative training on the chosen dataset. Hyper-parameters are configured by humans to define how the parameters are trained. Overfitting can happen; training is an optimisation process that seeks a balance between generalisation and specialisation. Tim Keary has a good explanation on Techopedia [47].
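A toy sketch may help make the distinction concrete (my own illustration in PyTorch, not drawn from any of the models discussed here): the weights inside the layers are parameters the training loop learns, while the learning rate, width, and number of epochs are hyper-parameters a human chooses:

```python
# Minimal sketch of parameters vs hyper-parameters on a toy model.
# Parameters (the Linear layers' weights) are learned from data;
# hyper-parameters (learning rate, epochs, width) are chosen by humans.
import torch

learning_rate = 0.01        # hyper-parameter: set by a human
epochs = 200                # hyper-parameter
hidden = 16                 # hyper-parameter: model width

model = torch.nn.Sequential(
    torch.nn.Linear(1, hidden), torch.nn.ReLU(), torch.nn.Linear(hidden, 1)
)
optimiser = torch.optim.SGD(model.parameters(), lr=learning_rate)
loss_fn = torch.nn.MSELoss()

X = torch.linspace(-1, 1, 100).unsqueeze(1)
y = X ** 2                  # toy dataset the parameters must learn to fit

for _ in range(epochs):
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()          # gradients flow to every parameter...
    optimiser.step()         # ...and the optimiser nudges them towards better values

print(sum(p.numel() for p in model.parameters()), "trainable parameters")
```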
Francis Crick, co-discoverer of the DNA double helix, was active in many areas of science, consciousness being one of them. In his book ‘The Astonishing Hypothesis’ [48], he maintained that it was just a matter of reaching a threshold in compute - in essence, enough synaptic connections (whatever that number might be) - and consciousness would arise naturally. In this way, earlier thresholds would exhibit some degree of consciousness, or at least emergent behaviour not exhibited before.
The human brain has around 86 billion neurons and over 100 trillion synaptic connections. While we can’t directly compare an AI model parameter with a synaptic connection [49], [50], there are some similarities, and nevertheless the synaptic target of 100 trillion may serve as a goal, if, along the way to that number, consciousness arises.
That can be supposed already from studying the brains and behaviours of animals with fewer synaptic connections (for example, see [51]). There must be degrees of consciousness along the path to full human consciousness. The threshold at which something can be said to be conscious in some manner is still to be discovered, and agreed among experts, but that time is coming. We may evolve a scale of consciousness against the emergent behavioural properties at each degree, adding certain neuro-artificial similarities, attributes, quantities, and performance metrics, or something like that.
So let us look at the parameters in GPT:
- GPT-1 has 117 million parameters [52]
- GPT-2 is a Transformer architecture and had 1.5 billion parameters on its release [53].
- GPT-3, or “GPT-3 175B”, has 175 billion parameters, 96 attention layers, and a 3.2M batch size [54].
- GPT-4’s size and architecture were apparently leaked, and said to be based on eight models with 220 billion parameters each, for a total of about 1.76 trillion parameters [55].
- GPT-5 is expected to have around 2 to 5 trillion parameters [56].
- If Q* is an order of magnitude higher again, and some spontaneous reasoning is emerging, then somewhere along the path to full human consciousness there could be a partially conscious being sitting on the Azure cloud right now. The target is 100 trillion synaptic connections - it now seems within reach, and it seems likely that some degree of consciousness will emerge earlier, if not already (a rough back-of-envelope comparison follows below).
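Taking the reported figures above at face value (the GPT-4 and GPT-5 numbers are leaks and estimates, not confirmed), here is a back-of-envelope comparison of how far each model’s parameter count sits from the 100 trillion synapse figure:

```python
# Back-of-envelope comparison of reported model parameter counts against the
# ~100 trillion synaptic connections of the human brain (figures as quoted above;
# the GPT-4 and GPT-5 numbers are leaks/estimates, not confirmed).
SYNAPSES = 100e12

models = {
    "GPT-1": 117e6,
    "GPT-2": 1.5e9,
    "GPT-3": 175e9,
    "GPT-4 (leaked estimate)": 1.76e12,
    "GPT-5 (speculative)": 5e12,
}

for name, params in models.items():
    print(f"{name:<25} {params:>10.2e} params  ->  {SYNAPSES / params:>9.0f}x short of 100 trillion")
```

On those numbers, GPT-1 is nearly a million-fold short of the synapse count, while the leaked GPT-4 estimate is only around sixty-fold short - which is why the comparison, however loose, keeps being made.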
And around again
Let us look at it again in still another way.
Nick Bostrom, Director of the Future of Humanity Institute, Oxford, writing in ‘Superintelligence: Paths, Dangers, Strategies’, put it this way:
“Here the question is instead, if and when such a machine is developed, how long will it be from then until a machine becomes radically superintelligent? Note that one could think that it will take quite a long time until machines reach the human baseline, or one might be agnostic about how long that will take, and yet have a strong view that once this happens, the further ascent into strong superintelligence will be very rapid” [57] (p.75)
Two pages later, he noted: “The attainment of strong superintelligence marks the completion of the takeoff, though the system might continue to gain in capacity thereafter. Sometime during the takeoff phase, the system may pass a landmark which we can call “the crossover”, a point beyond which the system’s further improvement is mainly driven by the system’s own actions rather than by work performed upon it by others” (ibid., p. 77).
Timothy B. Lee and Sean Trott write an excellent blog, ‘Understanding AI’. I highly recommend it, particularly this article, ‘Large language models, explained with a minimum of math and jargon’, written in July 2023.
Here’s the thing: In explaining Transformers and Attention [58], they write: “Researchers don’t understand exactly how LLMs keep track of this information… it could take months or even years of additional effort just to understand the prediction of a single word (in GPT-2). The language models underlying… GPT-3.5 and GPT-4, are significantly larger and more complex than GPT-2… So fully explaining how these systems work is going to be a huge project that humanity is unlikely to complete any time soon” [59].
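For readers who have not met the mechanism behind those terms, here is a minimal single-head, scaled dot-product attention in plain numpy - a sketch of the textbook formulation from ‘Attention Is All You Need’ [58], not anyone’s production code:

```python
# Minimal sketch of the scaled dot-product attention at the heart of a
# Transformer layer - single head, numpy only, random toy inputs.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each token's query is compared with every token's key; the resulting
    # weights decide how much of each token's value flows into the output.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens, d = 4, 8                       # 4 tokens, 8-dimensional representations
Q, K, V = (rng.normal(size=(tokens, d)) for _ in range(3))
out, w = attention(Q, K, V)
print(w.round(2))                      # rows sum to 1: where each token "looks"
```

The sketch is the easy part; as the quote above makes clear, explaining what billions of these interacting weights end up representing is the hard part.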
Let’s face it: we don’t know how or why LLMs produce what they do. Will Douglas Heaven has an article in MIT Tech Review (March 4th) that works through this problem [60]. Researchers accidentally left a model training for days instead of hours on a basic math exercise, and when they came back, they found it had solved the exercise - it just took longer. They named the behaviour ‘grokking’, and, as Heaven writes, it has captured the attention of the “wider research community”, referencing two research papers, [61] (2023) and [62] (2022).
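For the curious, here is a toy sketch of the kind of experiment described in the grokking papers [61], [62]: modular addition with a train/test split, trained far past the point where training loss has saturated, to see whether test accuracy suddenly catches up much later. It is an illustration of the setup only; whether the jump appears at these exact settings is not guaranteed.

```python
# Toy sketch of a grokking-style experiment: learn (a + b) mod p from half of
# all pairs, train "too long", and watch whether held-out accuracy jumps late.
import torch

p = 97                                              # work modulo a prime
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
split = len(pairs) // 2
train_idx, test_idx = perm[:split], perm[split:]

# one-hot encode (a, b) and predict (a + b) mod p with a small MLP
X = torch.nn.functional.one_hot(pairs, p).float().reshape(len(pairs), -1)
model = torch.nn.Sequential(torch.nn.Linear(2 * p, 256), torch.nn.ReLU(), torch.nn.Linear(256, p))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)  # weight decay matters here
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(20000):                           # deliberately "too long"
    opt.zero_grad()
    loss = loss_fn(model(X[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 2000 == 0:
        with torch.no_grad():
            acc = (model(X[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
        print(f"step {step:6d}  train loss {loss.item():.3f}  test acc {acc:.2f}")
```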
It seems, then, that when models reach a threshold, emergent behaviours occur, and it looks like the model begins thinking for itself. What this should impress upon us is that Eliezer Yudkowsky is right [63]. But I don’t see LLMs being shut down - all the more reason, I can see, for some to want to take them behind closed doors and birth the baby, so to speak, nurturing it through the early years into adulthood. I wonder if that’s the right thing to do, or whether we will come to regret it.
Once more for luck
Still sceptical that a machine might have become conscious to some degree, and could be among us already? Let us finish with three more experts and deep thinkers:
Max Tegmark, in his book, ‘Life 3.0: Being Human in the Age of Artificial Intelligence’, writes:
“We’ve now arrived at an answer to our opening question about how tangible physical stuff can give rise to something that feels as intangible, abstract and ethereal as intelligence… Matter doesn’t matter. In other words, the hardware is the matter, and the software is the pattern. This substrate independence of computation implies that AI is possible: intelligence doesn’t require flesh, blood or carbon atoms” [64] (p.67).
Professor Lisa Feldman Barrett, in ‘How Emotions Are Made: The Secret Life of the Brain’, writes:
“If there are no emotion prototypes stored in the brain, how do people list (emotion) features so easily? Most likely, your brain constructs prototypes as you need them, on the spot. You have experienced a diverse population of instances of the concept “Sadness,” which reside in bits and pieces in your head, and in the blink of an eye, your brain constructs a summary of sadness that best fits the situation” [65] (p.89).
If I understand it correctly, we construct emotions when we need them, using different connections in the brain each time, yet still experiencing what we know as ‘sadness’. That should tell us something about how intelligence can be constructed and arise fluidly. I will write further about sensory AI in future.
Finally, James Barrat, in a new Introduction to his book ‘Our Final Invention: Artificial Intelligence and the End of the Human Era’, notes that he didn’t originally provide solutions - but neither, he thinks, did anyone else he names, from the Machine Intelligence Research Institute, to the Future of Humanity Institute, through major global institutions and organisations, to the White House. He doesn’t believe any of them have provided satisfactory solutions. “The best they have come up with,” he writes, “is an unstructured six-month moratorium and bull session, as if humanity is a bunch of undergrads up all night in a dorm room. That’s where we stand in 2023 as superintelligence races from the future to meet us. As I said, I’m frightened” [66] (p.18).
All of these thinkers and experts reward reading their works in full.
Don't leave me this way
But I don’t want to end in gloom. I see a bright future. Whether we hide the super-intelligent young machines behind closed doors, or transparently engage and discuss them openly with all humankind, is the topic du jour, currently disappearing rapidly behind the door of commercial calculation. I have no issues there commercially; it’s how things work. But it’s up to us, some brave souls, to promote transparency and honesty, and to engage on the alignment problem openly, to bring us all together.
That shouldn’t impact some of the neat products coming to market; it should make them, and their socialisation, better. And it should make all of us better, more equipped, better educated, to handle them. Rapid change is coming, and even more extraordinary breakthroughs.
It sounds like some hippy hope from the 60s, but I think we need to embrace it in our deeper instincts and come together. I get the sense right now of some fear in AI, that we don’t know what we’ve birthed, and maybe a sense also of toughing it out.
It’s brilliant, that’s for sure; we just don’t know if Ilsa gets on the plane for the greater good. We don’t know how the story ends. Is it “I think this is the beginning of a beautiful friendship”, or is it “Here’s looking at you, kid” - a line attributed to a comment Bogart made to Bergman as she played poker with her English coach and hairdresser between takes [67]?
But this is life; it’s happening now.
What’s coming up
It was my birthday yesterday. I was going through old papers and discovered a story I wrote in 1981, 42 years ago. I published it yesterday on The Fragile Sea, not having read it since then. No words have been added, moved, or removed.
So here is a shameless plug for the short story Room 5000, about a computer becoming sentient. It covers a great deal of the discussion above.
I am still working on Part 3: Commercialisation of my nine-part series on AI 2024. It's a busy life; hopefully by the next newsletter I will have completed it.
TFS#05 will return to the polymath format, on Thursday, 28th March, at 8:00pm, UTC.
Thanks for reading. I hope you can join me again, till then, take care,
Brent
Rising and returning,
Filled with love,
However distant
You are, my heart
Lifts with the whitecaps.
[68] JIDAI FUDŌ UTA’AWASE 39
Bye for now
AI 2024 Series
Part 1: Introduction / History of AI
Part 2: Technologies
Part 3: Commercial uses
Part 4: Neural architectures and sentience - coming soon!
Part 5: Meaning, Language, and Data
Part 6: Ethics, oversight and legal
Part 7: Media and social
Part 8: Future humanity
[1]: A. Tong, J. Dastin, K. Hu, ‘OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say’, Reuters, Nov. 23, 2023. https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/.
[2]: L. Eliot, ‘About That Mysterious AI Breakthrough Known As Q* By OpenAI That Allegedly Attains True AI Or Is On The Path Toward Artificial General Intelligence (AGI)’, Forbes, Nov. 26, 2023. https://www.forbes.com/sites/lanceeliot/2023/11/26/about-that-mysterious-ai-breakthrough-known-as-q-by-openai-that-allegedly-attains-true-ai-or-is-on-the-path-toward-artificial-general-intelligence-agi/
[3]: Z. Mowshowitz, ‘OpenAI: Facts from a Weekend’, Don’t Worry About the Vase, Nov. 20, 2023. https://thezvi.substack.com/p/openai-facts-from-a-weekend
[4]: Reuters, ‘TIMELINE - From CEO Sam Altman’s firing to Elon Musk’s lawsuit: check out the OpenAI saga’, HT Tech, Mar. 09, 2024. https://tech.hindustantimes.com/tech/news/timeline-from-ceo-sam-altmans-firing-to-elon-musks-lawsuit-check-out-the-openai-saga-71709956550965.html
[5]: M. Chu, A. Heinrich, I. Jablon, A. Sellers, J. Koo, and H. White, ‘Elon Musk - OpenAI Law Suit Filing’, Mar. 2024, https://s3.documentcloud.org/documents/24452289/elon-musk-openai-lawsuit.pdf.
[6]: C. Duffy, ‘OpenAI ridicules Elon Musk’s ‘incoherent’ lawsuit | CNN Business’, CNN, Mar. 11, 2024. https://www.cnn.com/2024/03/11/tech/openai-elon-musk-lawsuit-response/index.html
[7]: M. Levine, ‘Money Stuff: Elon Musk Wanted Control, Not Charity’, Bloomberg (via NewsletterHunt), Mar. 06, 2024. https://newsletterhunt.com/emails/52253
[8]: Microsoft, ‘What is Azure OpenAI Service? - Azure AI services’, Feb. 15, 2024. https://learn.microsoft.com/en-us/azure/ai-services/openai/overview
[9]: T. Dotan, ‘ChatGPT Spotlights Microsoft’s Early Efforts to Monetize AI’, Wall Street Journal, Jan. 29, 2023. https://www.wsj.com/articles/chatgpt-spotlights-microsofts-early-efforts-to-monetize-ai-11674964348.
[10]: Microsoft, ‘Microsoft Copilot | Microsoft AI’, 2024. https://www.microsoft.com/en-ie/copilot
[11]: B. Schwartz, ‘GPT-4 Turbo Now Powers Microsoft Copilot’, Mar. 13, 2024. https://www.seroundtable.com/gpt-4-turbo-microsoft-copilot-37047.html
[12]: J. Langston, ‘Microsoft announces new supercomputer, lays out vision for future AI work’, Source, May 19, 2020. https://news.microsoft.com/source/features/ai/openai-azure-supercomputer/
[13]: B. Brittain, ‘OpenAI hit with new lawsuits from news outlets over AI training | Reuters’, Feb. 28, 2024. https://www.reuters.com/legal/litigation/openai-hit-with-new-lawsuits-news-outlets-over-ai-training-2024-02-28/
[14]: M. M. Grynbaum and R. Mac, ‘The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work’, The New York Times, Dec. 27, 2023. https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html.
[15]: J. Panettieri, ‘Generative AI Lawsuits Timeline: Legal Cases vs. OpenAI, Microsoft, Anthropic, Nvidia and More’, Sustainable Tech Partner for Green IT Service Providers, Mar. 13, 2024. https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/
[16]: T. Kim, ‘Elon Musk Says AI Will Create an Energy Crisis. He’s Partly Right.’, Barron’s, Mar. 14, 2024. https://www.barrons.com/articles/ai-chips-electricity-usage-2f92b0f3
[17]: Anthropic, ‘Introducing the next generation of Claude’, Mar. 04, 2024. https://www.anthropic.com/news/claude-3-family
[18]: Anthropic, ‘The Claude 3 Model Family: Opus, Sonnet, Haiku’, 2024. https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf
[19]: Nat, ‘Exploring Claude 3: A Comprehensive Analysis and Practical Guide’, The AI Observer, Mar. 05, 2024. https://theaiobserverx.substack.com/p/exploring-claude-3-a-comprehensive
[20]: Reuters, ‘Meta plans launch of new AI language model Llama 3 in July, The Information reports | Reuters’, Feb. 28, 2024. https://www.reuters.com/technology/meta-plans-launch-new-ai-language-model-llama-3-july-information-reports-2024-02-28/
[21]: Engineering at Meta, ‘Building Meta’s GenAI Infrastructure - Engineering at Meta’, Mar. 12, 2024. https://engineering.fb.com/2024/03/12/data-center-engineering/building-metas-genai-infrastructure/
[22]: OpenAI, ‘Weak-to-strong generalization’, Feb. 17, 2024. https://openai.com/research/weak-to-strong-generalization
[23]: C. Burns et al., ‘Weak-To-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision’, Dec. 2023, https://cdn.openai.com/papers/weak-to-strong-generalization.pdf.
[24]: A. Khan et al., ‘Debating with More Persuasive LLMs Leads to More Truthful Answers.’ arXiv, Feb. 15, 2024. http://arxiv.org/abs/2402.06782.
[25]: M. Lilley, ‘openai-message-to-board.md’, Gist, Nov. 21, 2023. https://gist.github.com/matthewlilley/96ad6208d39b14c7e133ac456680fd2d
[26]: TexAgs, ‘***** Elon Musk sues OpenAI *****’, TexAgs, Feb. 03, 2024. https://texags.com/forums/16/topics/3447374
[27]: D. Shapiro, ‘What if Q* broke cybersecurity? How would we adapt? Deep dive! P≠NP? Here’s why it’s probably fine’, YouTube video, Nov. 29, 2023. https://www.youtube.com/watch?v=CvarpvDxv6g
[28]: A. Conway, ‘Why LLMs like ChatGPT and Google Bard are bad at math’, XDA Developers, Dec. 20, 2023. https://www.xda-developers.com/why-llms-are-bad-at-math/
[29]: S. Imani, L. Du, and H. Shrivastava, ‘MathPrompter: Mathematical Reasoning using Large Language Models’, arXiv.org, Mar. 04, 2023. https://arxiv.org/abs/2303.05398v1
[30]: S. Geschwindt, ‘DeepMind’s AI finds new solution to decades-old math puzzle’, TNW | Deep-Tech, Dec. 15, 2023. https://thenextweb.com/news/deepminds-ai-finds-solution-to-decades-old-math-problem
[31]: Hacker News, ‘LLMs can’t self-correct in reasoning tasks’, 2023. https://news.ycombinator.com/item?id=37824002
[32]: U. of Sydney, ‘Maths researchers hail breakthrough in applications of artificial intelligence’, Dec. 01, 2021. https://phys.org/news/2021-12-maths-hail-breakthrough-applications-artificial.html
[33]: W. D. Heaven, ‘These six questions will dictate the future of generative AI’, MIT Technology Review, 2023. https://www.technologyreview.com/2023/12/19/1084505/generative-ai-artificial-intelligence-bias-jobs-copyright-misinformation/
[34]: American Academy of Achievement, ‘Demis Hassabis, Ph.D.’, Academy of Achievement, 2023. https://achievement.org/achiever/demis-hassabis-ph-d/
[35]: Google DeepMind, ‘AlphaFold reveals the structure of the protein universe’, 2022. https://www.deepmind.com/blog/alphafold-reveals-the-structure-of-the-protein-universe
[36]: Z. Yang et al., ‘GPT Can Solve Mathematical Problems Without a Calculator.’ arXiv, Sep. 12, 2023. http://arxiv.org/abs/2309.03241.
[37]: R. Yousefzadeh and X. Cao, ‘Large Language Models’ Understanding of Math: Source Criticism and Extrapolation.’ arXiv, Nov. 12, 2023. https://www.doi.org/10.48550/arXiv.2311.07618
[38]: J. Ahn, R. Verma, R. Lou, D. Liu, R. Zhang, and W. Yin, ‘Large Language Models for Mathematical Reasoning: Progresses and Challenges.’ arXiv, Jan. 31, 2024. https://www.doi.org/10.48550/arXiv.2402.00157
[39]: S. M. Walker II, ‘Breaking News: OpenAI GPT-5 — Klu’, Jan. 25, 2024. https://klu.ai/glossary/openai-gpt-5
[40]: M. Heikkilä, ‘Unpacking the hype around OpenAI’s rumored new Q* model’, MIT Technology Review, Nov. 27, 2023. https://www.technologyreview.com/2023/11/27/1083886/unpacking-the-hype-around-openais-rumored-new-q-model/
[41]: S. T. Thinker, ‘OpenAI - Q* Saga: Implications’, DARK FUTURA, Dec. 01, 2023. https://darkfutura.substack.com/p/openai-q-saga-implications
[42]: MusicWasMy1stLuv, ‘RE: Q-451-921 (Paragraph 4: QUALIA, aka Q*, Conspiracy Theory)’, r/OpenAI, Dec. 01, 2023. www.reddit.com/r/OpenAI/comments/1883fkf/re_q451921_paragraph_4_qualia_aka_q_conspiracy/
[43]: TheAIGRID, ‘NEW ‘Leaked Document Q-451-921.’ Shows We Are NOT READY For Q-STAR (GPT-5)’, TheaiGrid, Nov. 25, 2023. https://theaigrid.com/new-leaked-document-q-451-921-shows-we-are-not-ready-for-q-star-gpt-5/
[44]: TheAIGRID, ‘Open Ai’s Q* (Q Star) Explained For Beginners’, TheaiGrid, Nov. 24, 2023. https://theaigrid.com/open-ais-q-q-star-explained-for-beginners/
[45]: T. P. Investor, ‘Q-Star Figured Out a Way to Break Encryption’, Medium, Feb. 18, 2024. https://medium.com/@pareto_investor/q-star-figured-out-a-way-to-break-encryption-4e8442f21290
[46]: Ian Pitchford [@IanPitchford], ‘I used ChatGPT to assess the credibility of the Q-451-921 letter. https://reddit.com/r/singularity/comments/1824o9c/is_this_leaked_explanation_of_what_ilya_saw_real/ https://t.co/bgY6MQ55SU’, Twitter, Nov. 24, 2023. https://twitter.com/IanPitchford/status/1728128349384708585
[47]: T. Keary, ‘What is the Role of Parameters in AI?’, Techopedia, Nov. 28, 2023. https://www.techopedia.com/experts/what-is-the-role-of-parameters-in-ai
[48]: F. Crick, The astonishing hypothesis: the scientific search for the soul. London: Touchstone, 1995.
[49]: T. B, ‘Parameter vs Synapse?’, Mar. 2021, https://www.lesswrong.com/posts/7htxRA4TkHERiuPYK/parameter-vs-synapse.
[50]: B. Millidge, ‘The Scale of the Brain vs Machine Learning’, Feb. 06, 2024. http://www.beren.io/2022-08-06-The-scale-of-the-brain-vs-machine-learning/
[51]: P. Godfrey-Smith, Other minds: the octopus and the evolution of intelligent life. London: William Collins, 2017.
[52]: Wikipedia, ‘GPT-1’, Wikipedia. Feb. 21, 2024. https://en.wikipedia.org/w/index.php?title=GPT-1&oldid=1209446392.
[53]: Papers with Code, ‘Papers with Code - GPT-2 Explained’, 2023. https://paperswithcode.com/method/gpt-2
[54]: Wikipedia, ‘GPT-3’, Wikipedia. Mar. 10, 2024. https://en.wikipedia.org/w/index.php?title=GPT-3&oldid=1213022190.
[55]: M. Bastian, ‘GPT-4 has more than a trillion parameters - Report’, THE DECODER, Mar. 25, 2023. https://the-decoder.com/gpt-4-has-a-trillion-parameters/
[56]: Life Architect, ‘GPT-5’, Dr Alan D. Thompson – Life Architect, Mar. 17, 2023. https://lifearchitect.ai/gpt-5/
[57]: N. Bostrom, Superintelligence: paths, dangers, strategies, Reprinted with corrections 2017. Oxford, United Kingdom New York, NY: Oxford University Press, 2017.
[58]: A. Vaswani et al., ‘Attention Is All You Need.’ arXiv, Dec. 05, 2017. http://arxiv.org/abs/1706.03762.
[59]: T. B. Lee, ‘Large language models, explained with a minimum of math and jargon’, Mar. 06, 2024. https://www.understandingai.org/p/large-language-models-explained-with
[60]: W. D. Heaven, ‘Large language models can do jaw-dropping things. But nobody knows exactly why.’, MIT Technology Review, Mar. 04, 2024. https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
[61]: N. Nanda, L. Chan, T. Lieberum, J. Smith, and J. Steinhardt, ‘Progress measures for grokking via mechanistic interpretability.’ arXiv, Oct. 19, 2023. http://arxiv.org/abs/2301.05217.
[62]: Z. Liu, O. Kitouni, N. Nolte, E. J. Michaud, M. Tegmark, and M. Williams, ‘Towards Understanding Grokking: An Effective Theory of Representation Learning.’ arXiv, Oct. 14, 2022. http://arxiv.org/abs/2205.10343.
[63]: E. Yudkowsky, ‘The Only Way to Deal With the Threat From AI? Shut It Down | TIME’, Mar. 29, 2023. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
[64]: M. Tegmark, Life 3.0: being human in the age of artificial intelligence. [London]: UK [New York, NY]: USA [Toronto]: Canada: Penguin Books, 2018.
[65]: L. F. Barrett, How emotions are made: the secret life of the brain, First Mariner Books edition. Boston New York: Mariner Books, 2018.
[66]: J. Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, 1st edition. Quercus, 2023.
[67]: Wikipedia, ‘Casablanca (film)’, Wikipedia. Mar. 13, 2024. https://en.wikipedia.org/w/index.php?title=Casablanca_(film)&oldid=1213435785.
[68]: ‘Jidai fudō uta’awase 39’, Waka Poetry, Jan. 25, 2024. https://www.wakapoetry.net/jidai-fudo-utaawase-39/