We're Not in Kansas Anymore

Today I stumbled upon two interesting things, in a short sequence.

First, an arXiv preprint authored by a team of Microsoft research scientists. It introduces AutoDev. In their words:

… we present AutoDev, a fully automated AI-driven software development framework, designed for autonomous planning and execution of intricate software engineering tasks. AutoDev enables users to define complex software engineering objectives, which are assigned to AutoDev’s autonomous AI Agents to achieve. These AI agents can perform diverse operations on a codebase, including file editing, retrieval, build processes, execution, testing, and git operations.

The second thing I saw was an article from the New York Times about how difficult it is to measure the performance of AI engines. A few lines caught my attention.

… today’s A.I. systems can pass the Turing Test with flying colors, and researchers have had to come up with new, harder evaluations.

and

But just as the SAT captures only part of a student’s intellect and ability, these tests are capable of measuring only a narrow slice of an A.I. system’s power.

Summarizing: Microsoft has developed (past tense) a system capable of essentially autonomous programming, with human oversight. And it is now treated as a well-known fact in popular, mainstream news that AI systems pass the Turing test, so of course we need different approaches to evaluate them.

As somebody who has managed teams for quite a while, I am familiar with how difficult it is to evaluate intelligent agents’ performance and to match them with jobs.

Here is my question: have we already slipped into the era of artificial intelligence without even noticing? We’ve already moved the goalposts by miles: of course the Turing test is not enough to identify intelligence; now we also demand self-awareness.

Are we already witnessing a form of technological discontinuity (call it a singularity), in which we and LLMs operate differently but synergistically?

The analogy that comes to mind (and that I was discussing with Carlo Brondi earlier today) is with viruses. Yes, they lack some of the most crucial characteristics of life (they have no metabolism, and can’t replicate on their own). However, they shape entire ecosystems, facilitate horizontal gene transfer, and can profoundly alter the behavior of their hosts.

Will AIs play a similar role?

Maybe the right question isn’t when machines will achieve superhuman capabilities, but when those capabilities will become sufficient to trigger self-reinforcing loops through human behavioral modification.

After all, Rabies lyssavirus has been quite good at replicating without self-awareness.


Originally published on LinkedIn in April 2024.