Dell World sure was different this year because it was held virtually. One of the most interesting parts of Dell World is the session on the future. In past years those sessions surfaced the coming Fourth Industrial Revolution and the approaching wave of robotics.

This year, the speakers covered a new branch of engineering focused solely on AI, the blended technology revolution surrounding food production, how AIs have been intentionally corrupted, and how music, math, and the Internet are creating new types of entertainment.

Let's talk about that this week, and I'll close with a product of the week that isn't quite a product yet, a technology that will transform your earbuds into smart earbuds and potentially give you superpowers.


Genevieve Bell: The 3A Institute on AI

Genevieve Bell is one of my favorite people and an Intel Senior Fellow working out of Intel's research organization. She opened this segment on the future by talking about a new effort she spearheads that pulls together many very different skills and people to create a new engineering-centric branch of study focused solely on artificial intelligence. I say engineering-centric rather than engineering-only because the folks involved come from a broad spectrum of backgrounds and skills that includes engineers as well as anthropologists, scientists, policy experts, and even musicians.

Unlike most other efforts, which seem to focus more on the technology, Genevieve's begins with studies of the history of cybernetics, going back to the conferences in the 1940s and 1950s that initially defined the space. For instance, in Australia, where the effort is based, they looked for guidance to Aboriginal fish traps, which were still in use in the early part of the last century and had performed successfully for centuries before that.

This course of study focuses more on asking questions than on problem solving. Often, jumping into problem solving forces people into a tactical approach without appreciating either the problem's full scope or its related dependencies, and it can conceal potential collateral damage from the resulting solution. Focusing on questions first helps ensure the problem is fully defined, creating a pathway to a more comprehensive and arguably safer solution.

The six core questions they ask about their AI efforts, which I've sketched as a simple checklist in code after the list, are:

Autonomy: Whether the system can perform without user intervention. Does it automatically do what is intended?

Agency: Whether the output of the system is constrained to an area of defined performance. Does it just do what is intended, or will it exceed its intended parameters to do unintended things?

Assurance: A quality metric assuring the result is secure, safe, and trustworthy; that it complies with the laws that surround it; that it is well regulated; and that it does what was intended.

Interfaces: How does the AI communicate and interact with the world, people, and other systems? Does it interrelate and communicate both effectively and optimally?

Indicators: How do you monitor the thing? What is the external mechanism to ensure the system doesn't go rogue?

Intent: What was the intent of the designers, what did they want to accomplish, and does the result comply with the intent of those who created it?

(It strikes me that if the folks who created Skynet had followed these elements, the Terminator movies would have been very different.)
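
The institute frames these as questions to ask, not code to run, but purely as an illustration, here is a minimal Python sketch of how a team might track the six questions as a review checklist. Everything in it (the AIAssessment class, its field names, the one-line phrasings of the questions) is my own invention for this column, not anything published by the 3A Institute.

```python
from dataclasses import dataclass, field


@dataclass
class AIAssessment:
    """Hypothetical review checklist loosely modeled on the six 3A questions."""
    system_name: str
    answers: dict = field(default_factory=dict)  # question -> (satisfied, note)

    # Shorthand phrasings of the six questions; invented for illustration.
    QUESTIONS = {
        "autonomy": "Can the system perform without user intervention?",
        "agency": "Is its output constrained to the defined performance area?",
        "assurance": "Is the result secure, safe, trusted, lawful, and regulated?",
        "interfaces": "Does it communicate effectively with people and systems?",
        "indicators": "Is there an external mechanism to spot rogue behavior?",
        "intent": "Does the result comply with what its designers intended?",
    }

    def record(self, question: str, satisfied: bool, note: str = "") -> None:
        """Record an answer to one of the six questions."""
        if question not in self.QUESTIONS:
            raise ValueError(f"Unknown question: {question!r}")
        self.answers[question] = (satisfied, note)

    def unresolved(self) -> list:
        """Questions not yet answered 'yes' -- ask these before building."""
        return [q for q in self.QUESTIONS
                if not self.answers.get(q, (False, ""))[0]]


# Example: flag everything Skynet's designers skipped.
review = AIAssessment("Skynet")
review.record("autonomy", True)
review.record("agency", False, "Exceeded intended parameters.")
print(review.unresolved())
# ['agency', 'assurance', 'interfaces', 'indicators', 'intent']
```

The design point mirrors Bell's argument: unresolved() surfaces the questions you haven't yet answered, nudging a team to finish asking before it starts problem solving.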

This group has launched a master's program, and students from all over the world have enrolled in it. In the end, it is this effort, and those like it, that will help ensure that the AIs of tomorrow don't make "Terminator" unfortunately prophetic.