CAPTURING STYLE

In a game of chess, the outsider often gauges the quality of the gameplay by the entertainment and surprise factor offered by the players’ choice of moves. A good chess match is a mind-bending dance of strategy and power: a true spectator sport.

The advent of artificial intelligence (AI) — specifically, software chess engines — seems to have painted this vibrant tableau in shades of monotony.


Since chess world champion Garry Kasparov’s defeat by IBM’s Deep Blue in the late ’90s, human chess players have grudgingly accepted that chess engines can beat them. However, they have found solace in the fact that chess engines are utilitarian in style: A classic chess engine may well consider tens of millions of alternative moves per second, but its playing style — especially with a limited lookahead — is boring. Unadorned and calculated. There’s none of the dynamism or creativity. None of the humanness. Developers did add some randomness to introduce a semblance of unpredictability, but this resulted in predictable sequences of generally good moves interspersed with occasional mistakes.

As a classic engine explores possible move sequences, it assigns an evaluation score to each resulting game state. The score reflects the relative power balance between the players after a move: If white gains a pawn, the score rises by 1, but if black gains a knight, white’s score drops by 3. More sophisticated engines incorporate positional information, such as the location of the pieces on the board, and even factors like piece mobility and the safety of the king.

The addition of randomness forces chess engines to choose a move sequence at random from those with similar scores, or even occasionally play a randomly generated move, just to spice things up.
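In code, this classic recipe boils down to a material tally plus a random pick among near-equal options. A toy sketch in Python (the piece values follow the convention above; the `tolerance` knob and the move labels are invented for illustration):

```python
import random

# Conventional material values, measured in pawns
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def material_score(white_pieces, black_pieces):
    """Positive favors white, negative favors black."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

def pick_move(scored_moves, tolerance=0.5):
    """Pick at random among moves whose score sits within
    `tolerance` of the best: the step that spices things up."""
    best = max(score for _, score in scored_moves)
    candidates = [move for move, score in scored_moves
                  if best - score <= tolerance]
    return random.choice(candidates)
```

With the tolerance set to zero, the engine always plays a top-scored move; widen it, and the play becomes less predictable but also occasionally worse.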

But these chess engines still spit out long sequences of unimaginative good moves, peppered with the odd bad one.

And, as human players know all too well, blind randomization is unlikely to land a player on a winning streak.


CAPTION: ERNESTO DAMIANI is senior director of the Robotics and Intelligent Systems Institute at Khalifa University

The subsequent generation of chess programs, powered by artificial neural networks (ANNs), learns from gameplay examples rather than predetermined formulas. These networks take as input a binary encoding of the board: for each square, the piece type and player color occupying it.

The outputs are the evaluation-function values, leading to rapid and unpredictable gameplay, as the bulk of computation happens during training, not live games. Developers use millions of online chess games played between humans, or by humans against machines, to build training data. As the outcome of each game is known, it’s possible to finely tune the models, selecting the best move for each scenario. Once trained, the ANN lets the chess engine quickly and efficiently evaluate the score for each possible move — ANN-based chess engines become even more positional. And boring.

Google’s AlphaZero changed the landscape. It implements a new ANN-based approach that can be trained to play not just chess but other board games as well. Given a game state, the AlphaZero engine computes a policy that maps the game state to a probability distribution over the possible moves. In chess, the policy covers 4,672 possible moves — even for white’s first move. For human players, this is ridiculous: There are only 20 legal moves for white’s first move. And that’s the point.

DESIGN & PROMPTS: Anas Albounni, KUST Review. IMAGES: AI generated, KUST Review.

AlphaZero’s move space includes all sorts of moves that are illegal, like selecting empty squares, selecting the opponent’s pieces, making knight moves with rooks or making long diagonal moves with pawns. It also includes moves that pass through blocking pieces.

During training, nothing is learned or imposed about avoiding non-valid moves. The engine just post-processes the ANN output, filtering out illegal or impossible moves by setting their effective probability to zero. Then it re-normalizes the probabilities across the remaining valid moves. A philosopher could argue that this engine is devoid of ethics, as it does not distinguish illegal from impossible, but the resulting ANN structure is simpler than one expressing only valid moves. On a modern processor, AlphaZero needs just a few tens of milliseconds to make a move.
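The masking-and-renormalizing step is simple enough to show in a few lines of Python. A sketch (the move labels and probabilities are invented; a real engine operates on the full 4,672-entry output vector):

```python
def mask_and_renormalize(policy, legal_moves):
    """Zero out every non-valid move's probability, then rescale
    the survivors so they sum to 1 again."""
    masked = {move: (prob if move in legal_moves else 0.0)
              for move, prob in policy.items()}
    total = sum(masked.values())
    if total == 0:
        raise ValueError("no legal move received any probability")
    return {move: prob / total for move, prob in masked.items()}

# Four hypothetical entries of the policy output, two of them
# impossible in the current position
policy = {"e2e4": 0.4, "Ra1a8": 0.3, "d2d4": 0.2, "Pa2h7": 0.1}
cleaned = mask_and_renormalize(policy, legal_moves={"e2e4", "d2d4"})
```

Note that the network itself never learns the rules; all the legality knowledge lives in this post-processing step.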

This speed enabled AlphaZero to play against itself in millions of games, completing its training with reinforcement learning, which privileges moves that lie on a sequence that led to victory in the past. Of course, to know whether a move is on a winning sequence, one must complete the game, so AlphaZero’s reinforcement was performed by playing “fast and dumb,” i.e., using a very shallow search depth. Playing dumb in the reinforcement training phase maximizes the number of games that end in victories and defeats rather than in uninformative draws. As a result, AlphaZero considers far fewer positions than the algorithmic chess engines of the past.


A classic chess engine may well consider tens of millions of alternative moves per second, but its playing style is boring.

AlphaZero and successors like Deep Chess have a distinctive style, steering away from merely seeking positional or material advantage. Their style is alien — best likened to atonal music: difficult to appreciate for anyone but the chess elite, and certainly useless for the average human amateur to learn or improve their game.

It is interesting that we humans still describe an intelligent chess engine’s style in positional terms.

Like the ‘90s Deep Blue victory, today’s post-AlphaZero scenario highlights some general problems that we will have to solve to be able to work together with the super-human AI engines of the future. AI decision-making must be intelligible to humans for us to accept its decisions. This interpretability needs to be wired into the AI training. Plus, interacting with humans is a crucial step for AI engine evolution: Playing against all possible competitors makes them stronger than any human can individually hope to become.

The game of chess, always a metaphor for life, suggests that controlling the evolution of future AI engines may become more akin to taming a tiger than training a pet.

Ernesto Damiani is senior director of the Robotics and Intelligent Systems Institute at Khalifa University.

Build your own robot

The first step in building any robot is to decide what you want it to do. While most of the robot’s abilities will be unlocked with clever machine learning and artificial intelligence algorithms, you need to set your robot up for success with the right mechanical features.


For a human eyeball, nice and round, turn to embedding light-sensitive receptors directly onto the surface of a 3D sphere, as a team from the Hong Kong University of Science and Technology, UC Berkeley and the Lawrence Berkeley National Laboratory did.

You could also add a narrow-bandgap semiconductor as a photosensing material — then your robot could see in the dark with infrared light sensing. In lieu of realism, you could turn to any number of sensors to let your robot “see”:

Distance sensors and gauges – maybe an ultrasonic range finder or laser measurement sensor.

Positioning sensors – room navigation or indoor localization might come in handy. A GPS system or other live tracking devices will help your robot find its way around.

Thermal imaging sensors or pressure sensors are also an option.

Facial recognition – that’s some machine learning pre-programming.


LEGS
Want to jump? Forget biomimicry. Researchers at UC Santa Barbara use an actuator system based on elasticity: a spring with rubber bands and carbon-fiber slats that shoots the bot into the air.

Or keep the biomimicry but add hydraulic systems and electric motors a la Boston Dynamics’ Atlas.

You could leave humanity behind and go the marsupial route. German engineering firm Festo took it one step further and developed the BionicKangaroo.

A “tendon” in its robotic leg drives it forward and captures energy on landing; the impact snaps the spring-loaded legs into position for the next leap.

GRAPHICS: Abjad Design

Stanford University engineers developed a “stereotyped nature-inspired aerial grasper,” or SNAG: bird-inspired feet that can perch on a variety of branches.


WINGS
Go classic with drone design and choose rotary wings that spin to create lift and thrust like a helicopter. These are best for hovering, vertical takeoff and changing direction quickly.

Maybe you’d rather the classic plane look and have room for a runway or launcher. Fixed wings generate lift by moving through the air and offer higher speed, longer endurance and greater stability, though your robot will be at the mercy of the weather conditions.

You could even turn to the flapping wings of insects and birds. The Harvard team that developed a tiny solar-powered robot styled after a honeybee uses a complex transmission system of gears and motors. A team at the University of Bristol developed a tiny flying robot that flaps its wings more efficiently than an insect, using an electrostatic “zipping” mechanism (their words).


HANDS
What kind of hand does your robot need? Do you want the classic gripper, optimized for delicacy or accuracy? Or is a suction cup plenty?

How many joints does your robot arm need? You’re not limited by human anatomy here.

Many robot hands come with sensors packed into their fingertips only, but an MIT team built a robotic finger with sensors providing continuous sensing along the finger’s entire length, allowing it to accurately identify an object after grasping it just one time.

Researchers at Columbia Engineering developed a highly dexterous robot hand that can operate in the dark. It uses tactile sensors rather than vision to manipulate objects.

Data-driven energy

You might think AI’s role in the energy industry is restricted to exploration — namely, finding the goods. But that’s only a small part of what AI can do.


“AI is impacting almost every single aspect of the energy industry, and as the industry begins rolling out AI-based solutions and realizes the potential of this technology, we expect AI to expand into every area of the industry in some way or another,” says Chris Cooper, CEO of AIQ, an Abu Dhabi-based technology company taking on sustainability and efficiency across the energy sector.

AIQ is a joint venture between the Abu Dhabi National Oil Co. (ADNOC) and AI specialist G42.

Much of the efficiency AI offers in the industry stems from prevention — in all stages of processes.

AI AS RISK MANAGER

AI is being used in risk aversion: Keeping track of regular machinery maintenance prevents equipment malfunctions and breakdowns.

Though preventative maintenance has been around for a long time, AI can constantly analyze the current condition of equipment and use historical and current data to detect deviations from the norm. This can flag issues before they happen, reducing or eliminating failures and therefore saving millions of dollars in downtime.

Not only does this save money; AI’s preventive side is also tasked with spotting indications that environmental disasters are on the horizon.

Oil spills are one of the most impactful risks to the environment, especially the oceans. In 2022 alone, 700 metric tons of oil leaked in four oil spills. The damage is catastrophic to ocean wildlife and ecosystems: Animals can die from hypothermia when the fur that insulates them or the feathers that repel water become coated in oil. And while cleaning themselves, these animals ingest toxins that can impact reproduction and growth rates in offspring.

Aside from things we cannot control like natural disasters, the main causes of oil spills are equipment failure and human error. AI can manage these.

The cost of human error is high. The 2010 Deepwater Horizon explosion, for example, killed 11 people and caused the most expensive marine oil spill in history. It cost BP and partners about U.S.$71 billion in legal fees and cleanup costs.


“Many of the AI solutions that have been developed to address efficiency and productivity will also bring improvements to overall sustainability by reducing waste, eliminating unnecessary processes and so on.”

Chris Cooper, CEO of AIQ

AI is used to monitor pipeline conditions for small amounts of deterioration, cracks and other flaws, offering critical real-time information. Identifying imperfections can help operators address the issues before leaks occur, accidents happen or machines fail. Predictive maintenance contributes to safety and savings across the industry and reduces unplanned downtime, which typically averages 27 days at a cost of U.S.$38 million.

ADNOC initiated pilot testing of its Centralized Predictive Analytics Diagnostics (CPAD) in 2017, and it remains at the center of the company’s digital transformation path.

“CPAD’s predictive maintenance capability can track any mechanical degradation as well as variations in performance to help maintenance teams to plan required work well in advance with consideration to any production constraints,” the website reads.

“AI technology is contributing to monitoring and maintenance again in many different areas, from computer vision being used to monitor pipelines, to sensor monitoring to detect variations in machine operations, through to chemical analysis to detect corrosion in pipelines and monitoring of remote sites or hard-to-reach locations using drones,” AIQ’s Cooper says.

AI can also monitor pressure and oil-flow rates to identify issues before they become problems, mitigating leaks and ensuring worker safety.


The benefits are ample, protecting the environment, workers and the bottom line.

AI’s ability to predict demand means it is also poised to provide risk assessment for investors.

Investment analysis incorporates political and economic events, trends in oil products across the value chain, historical data, public inclination and so on. AI can collate all the information that contributes to price fluctuations to help investors make well-informed decisions.

EXACTING THE EXTRACTION

Energy companies are also harnessing AI capabilities to process data from seismic surveys, improving the accuracy of drilling locations and the drilling plans themselves.

AI algorithms can come up with new extraction techniques and create reservoir models to anticipate how different extraction approaches will work in multiple conditions. This could mean better operations and more lucrative extractions with less environmental degradation.

“Better reservoir development and planning can result in fewer wells being drilled to extract the same resources,” Cooper says.

The best way to develop and plan is to model. One AI tool AIQ has on the market to assist with reservoir modeling is AR360 (Advanced Reservoir 360). The 360-degree model allows for review and all-encompassing digital assessment of existing reservoir-simulation models.

“The Reservoir Performance Advisor module takes advantage of machine learning, advanced analytics, petro-technical workflows and business logic,” Cooper adds. The system analyzes the data, identifies wells with declining performance and then proposes solutions. What used to be done manually over months is now completed in minutes, improving strategic production and field-development planning while reducing costs and emissions.

Once the oil is out of the ground and ready for the refining process, analysis ensures maximum efficiency throughout. Refining the process, not just the oil, reduces offsets and, consequently, the environmental impact.

“Many of the AI solutions that have been developed to address efficiency and productivity will also bring improvements to overall sustainability by reducing waste, eliminating unnecessary processes and so on,” Cooper tells KUST Review.

Now that the oil is produced, it’s time to get it to its final destination.

AI AS SUPPLY-CHAIN MANAGER

Supply chains around the world are improved by machine learning. Behaving proactively, rather than reactively, will not only get goods from A to B efficiently and safely, it can balance out supply and demand, saving money across the entire production process.

There are many factors built into predicting oil prices, and while some believe it’s too complex for AI to accurately predict, that isn’t stopping researchers from trying.

A team from China’s Shenzhen University found that rather than using single-model machine-learning methods for price prediction, combining multiple models with targeted Google Trends online data shows promise. This structure, the team says, results in more accurate anticipation of crude-oil price fluctuations.

The team’s AI analyzes large amounts of historical data, looks for patterns and trends and then combines this with current market information to predict demand.
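The combination step itself can be as simple as a weighted average of the individual models’ forecasts. A minimal sketch in Python (the model names, prices and weights here are invented for illustration, not the Shenzhen team’s actual figures):

```python
def ensemble_forecast(forecasts, weights):
    """Blend several models' price forecasts into a single number,
    weighting each model, e.g., by its past accuracy."""
    total_weight = sum(weights.values())
    return sum(forecasts[name] * weight
               for name, weight in weights.items()) / total_weight

# Hypothetical per-barrel forecasts, including one model informed
# by online search-trend data
forecasts = {"lstm": 82.0, "arima": 79.0, "trends": 84.0}
weights = {"lstm": 0.5, "arima": 0.2, "trends": 0.3}
blended = ensemble_forecast(forecasts, weights)
```

The idea is that no single model is right all the time, so letting several models vote, with the historically better ones counting for more, tends to smooth out individual errors.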

Supply-chain improvement also means better navigation for ships — shorter routes, safety monitoring en route and recommendations for changing course, should the need arise. This is also a sustainability issue.


Paul McStay, performance manager for oil giant Shell’s liquefied natural gas fleet, says improving efficiency can help the industry reduce carbon emissions. “If we can improve our efficiency, we can reduce the amount of time that we’re waiting at port. Then we can reduce our fuel usage. And by reducing our fuel usage, we can improve our emissions,” he says on Shell’s website.

Route planning and fleet management using operations research, mathematics and optimization techniques aren’t new, says Mohammed Omar, who chairs management science and engineering at Khalifa University.

But amplified efficiency and accuracy provided by AI across the board reduce risk, save money and lower the carbon footprint.

According to AIMMS, an analytics software company that has been using mathematical optimization to help companies become more efficient since 1989, supply chains can no longer live without AI. Conversely, AI cannot live without supply-chain planners.

WHAT’S NEXT?

In a 2023 survey from EY, a leading auditor of oil and gas companies, 50 percent of oil companies reported using AI in some way, and 92 percent said they plan to begin or expand AI applications within the next five years.

With those statistics, it might be understandable for industry workers to worry about their jobs. But many developmental reports and articles insist AI will work in conjunction with humans, not replace them. They say a shift will definitely occur in human roles, but it is unlikely to result in significant job loss.

According to a 2019 EY report, “The AI revolution is already here for some, and for others, such as oil and gas, it’s just around the corner. AI and ML techniques applied to the sector have the potential to take large amounts of structured and unstructured data with a processing power far greater than a company’s workforce, creating transformative impact. What’s more, when AI and ML are coupled with human workforce capabilities, the combined collective intelligence impact has the potential to create lasting competitive advantage.”

Five years on, it seems EY’s foresight was on the money.

Cooper uses the example of WellSight, AIQ’s suite of AI-enabled applications for borehole image-data analysis. The analytics are a time-saver for petrophysicists, freeing them up for more complex work, and the system offers information to enhance planning for drilling operations.

It all sounds positive, but what’s the catch?

WE’RE STILL LEARNING

We’re still on the cusp of understanding AI’s full capabilities in any industry. That means we need people skilled in AI and machine learning. There will also be significant investment in training — training throughout the digital transformation but also retraining those shifting to different roles.

Plus, big data is what AI needs in order to perform, but if the data isn’t relevant, the AI will not meet expectations, and all that money will be wasted.


“This cutting-edge technology could be used to make our world safer and better, opening up possibilities that seemed like science fiction just a few years ago.”

Wael William Diab, of the International Organization for Standardization

Finally, the main challenge is getting the buy-in in the first place. Some businesses simply aren’t ready.

Wael William Diab, from the International Organization for Standardization, says it’s all about the mindset: “An AI-positive future is possible, but we need to actively pursue it. If we approach AI with a positive mindset, placing societal needs such as ethics and sustainability at the heart of its development, then we can unlock its full potential,” he says on the non-governmental organization’s website.

“If it is developed ethically and responsibly, AI could help to usher in a new era of innovation and inclusion,” he adds. “This cutting-edge technology could be used to make our world safer and better, opening up possibilities that seemed like science fiction just a few years ago.”

BUT SHOULD WE OR SHOULDN’T WE?

Some of the big concerns surrounding AI rollout globally are ethics and trust. But in the energy industry, buy-in concerns can typically be mitigated with transparency and education.

“If an engineer doesn’t understand how an AI draws the inference it does, then they are less likely to trust the outcomes, and won’t be able to understand any errors that might occur. So, it is important that the users have a good understanding of how the solution works,” says Cooper.

AI understanding is one aspect, but when something goes wrong, the first question is usually: Who did it? Who is accountable when an autonomous AI system is at the helm?

Cooper says it all boils down to finding balance: Balance between rules, framework and AI decision-making and when a human is required to step in.

DATA to delivery

Welcome to Industry 4.0, considered by many experts to be the fourth industrial revolution. Artificial intelligence and data analytics are a big part of it and are already changing how supply chains work. Here are just some of the ways they make getting a product from the manufacturer to your home cheaper and more efficient.

IN THE FACTORY

Generative design: An algorithm receives design parameters (such as cost and information on available materials) and generates thousands of options to find the best one.


Order management: AIs handle complicated order information from multiple channels.

Quality control: Sensors inspect products for defects.

Predictive maintenance: AI monitors systems and machines for early signs something is about to break down, preventing expensive factory shutdowns.

Compliance management: AI manages the red tape when the same product is sold in different markets with different regulations.

Customization: AI may be used to create such customized orders as bespoke suits and made-to-order shoes. And in a process called “reshoring” or “nearshoring,” products made far away can be customized closer to the sale point at the last minute.
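The generative-design item above amounts to a search loop: propose many candidate designs, reject those that break a constraint, keep the cheapest survivor. A minimal Python sketch, with entirely made-up materials, costs and strength numbers:

```python
import random

# Hypothetical material properties: cost and strength per mm of thickness
MATERIAL_COST = {"steel": 2.0, "aluminum": 3.5, "composite": 8.0}
MATERIAL_STRENGTH = {"steel": 10.0, "aluminum": 6.0, "composite": 9.0}

def best_design(n_candidates=1000, required_strength=30.0, seed=0):
    """Generate many random designs and keep the cheapest one
    that still meets the strength requirement."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_candidates):
        material = rng.choice(list(MATERIAL_COST))
        thickness = rng.uniform(1.0, 10.0)  # mm
        if thickness * MATERIAL_STRENGTH[material] < required_strength:
            continue  # candidate is too weak: discard it
        cost = thickness * MATERIAL_COST[material]
        if best is None or cost < best["cost"]:
            best = {"material": material, "thickness": thickness, "cost": cost}
    return best

design = best_design()
```

Commercial generative-design tools use far smarter search than random sampling, but the generate-evaluate-keep loop is the same shape.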

IN THE WAREHOUSE

Stocking: Digital cameras monitor inventory levels and AI robots pick, sort and pack products.

Finding damaged packages: Machine learning models scan and analyze images to spot damaged objects.

Helping workers with wearable technology: Smart glasses “read” barcodes. Natural language processing helps humans work hands-free to pick items more safely.

THROUGHOUT THE PROCESS

Supply chain visibility: Internet of Things (IoT) devices provide instant information about such conditions as the location and temperature of shipments. Businesses can spot bottlenecks, manage disruptions in real time and make data-driven decisions.

Collaborative supply chains: Multiple companies use data and analytics to work together to plan and execute supply chain operations. The cooperative approach allows the companies to serve similar customers or achieve a common goal.

DELIVERIES

Optimal routes: Algorithms for the classic “vehicle routing problem” use such factors as capacity, delivery priorities and time windows to plot the most efficient routes.

Real-time conditions: AI can monitor weather, traffic and other conditions to reroute as necessary.

Autonomous vehicles: Truck platooning technology can permit a group of vehicles to operate extremely closely, reducing wind resistance and decreasing fuel consumption for transportation between factory and warehouse or retailer. Smaller vehicles will be used for deliveries. Algorithms optimize routes while AI helps vehicles avoid collisions.
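The route-planning item above rests on the vehicle routing problem. Its simplest heuristic, nearest neighbor, just drives to the closest unvisited stop; real planners layer capacity, priorities and time windows on top. A sketch with made-up grid coordinates:

```python
import math

def nearest_neighbor_route(depot, stops):
    """Greedy routing heuristic: from the current position, always
    drive to the closest unvisited stop."""
    route, current, remaining = [], depot, list(stops)
    while remaining:
        nearest = min(remaining, key=lambda stop: math.dist(current, stop))
        route.append(nearest)
        remaining.remove(nearest)
        current = nearest
    return route

# Three deliveries on a city grid, starting from a depot at (0, 0)
route = nearest_neighbor_route((0, 0), [(5, 5), (1, 1), (2, 2)])
```

Nearest neighbor doesn’t guarantee the shortest total route, which is why production systems refine it with the optimization techniques mentioned above.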

IN SEARCH OF THE METAVERSE

My office isn’t the most inspiring. It’s not bad, per se, but it’s not the peaceful lakeshore cabin conducive to creative thought and productivity that I’d like. If only it were socially acceptable to don my virtual reality (VR) headset and immerse myself in a futuristic cityscape or tropical haven and get all my work done. I want to pretend I’m floating among the stars on a spacecraft while replying to emails and writing my stories.


Jamie Gilpin, CMO at social media management tool Sprout Social, tells me that what I actually want is the metaverse. Sprout is one of many companies that have transitioned to a remote-first approach for their workers.

“Going to work in the metaverse may sound far-fetched but it may hold the answer to engaging workers in a virtual workspace. If your dream workspace is a beach, you might run into issues with sand getting into your keyboard,” she says. “The metaverse makes it possible to work wherever you want, without the limitations of the space. Allowing yourself to work in the environment where you feel most productive can yield incredible results.”

I had been thinking of VR, plain and simple. Is Gilpin right in saying the metaverse is the answer? What even is the metaverse?

As analysts for McKinsey and Co. wrote in a 2022 think piece, “if you’ve ever done a Google search for the term ‘metaverse,’ you’re not alone.”
Who hasn’t heard of the metaverse?

“The metaverse [was] the buzzword of 2022 in the same way that NFT was the buzzword of 2021,” says QuHarrison Terry, author of “The Metaverse Handbook: Innovating for the Internet’s Next Tectonic Shift.” “The metaverse is a fictional place imagined long before our current consumer-tech obsessions that has manifested into real progress. While the metaverse is far from a finished destination, there are thousands of people building it every second of every day.”

Herbert B. Dixon Jr. retired from the Washington, D.C., Superior Court in 2014. Before his retirement, he oversaw the courthouse’s most modern prototype courtroom: high-def TV screens and all. Now a regular contributor to the American Bar Association’s Judges’ Journal, he wrote in 2023: “The metaverse is a rapidly evolving idea. Describing the metaverse in 2023 is akin to explaining air or space travel to residents of the horse-and-buggy era. Every year, we see new technological advancements that a decade before would have seemed like science fiction.


“The metaverse makes it possible to work wherever you want, without the limitations of the space.”

Jamie Gilpin, CMO at Sprout Social

“The metaverse has been referred to as the three-dimensional internet and the future of the internet. My description of the future metaverse involves a digital universe (which may be real-world or imagined images) that your avatar enters to interact with other avatars.”

I don’t necessarily want an online representation of myself; I just want to pretend I’m working somewhere inspiring and quiet. But should I want to remain in my beautiful digital workspace, I’ll need an avatar to collaborate with my colleagues. They need a visible object in their digital environment that they can call “Jade” and I’ll need their avatars too. Yes, OK, online meeting platforms exist and I can change my background there and pretend I’m somewhere exotic but I want full immersion here.

Mariapina Trunfio, associate professor of economics and business management at the University of Naples, says the metaverse “defines a collective, persistent and interactive parallel reality created by synthesizing virtual worlds where people can use personal avatars to work, play and communicate with each other.”

In her 2022 Virtual Worlds paper, Trunfio explains that virtual technologies enhance the perceived immersion with the character realness of the avatars and residents: “Usually networked and situated with intelligent agents, they allow users to interact with virtual objects and intelligent agents freely, and to communicate with each other. In multiple forms, these worlds can be experienced synchronously and persistently by an unlimited number of users.”

I like the concepts of persistence and perceived immersion in Trunfio’s definition.

The McKinsey think piece also highlights that the metaverse means different things to different people:

“Some believe it’s a digital playground for friends. Others think it has the potential to be a commercial space for companies and customers. We believe both interpretations are correct. We believe the metaverse is best characterized as an evolution of today’s internet — something we are deeply immersed in, rather than something we primarily look at.”

In other words, as per the consultancy group’s working definition: “The metaverse is the emerging 3D-enabled digital space that uses virtual reality, augmented reality, and other advanced internet and semiconductor technology to allow people to have lifelike personal and business experiences online.”

ACCESS POINTS

To access the metaverse, says former judge Dixon, the user needs “a computer programmed to access the computer-generated environment, a head-mounted visual display or goggles to see the virtual environment, an audio headset, and hand- and body-tracking, motion-detecting controllers and sensors to provide a sense of touch and feel while traveling within the environment.”

Ernesto Damiani is the senior director of the Robotics and Intelligent Systems Institute and director of the Center for Cyber Physical Systems at Khalifa University. His definition of the metaverse focuses the most on the technology needed to access the metaverse: “The metaverse is a digital, virtual space that humans wearing haptic interfaces (like helmets, gloves and visors) can enter and roam by projecting their presence as avatars. The metaverse puts together virtual reality, augmented reality and low-latency multi-party communication technology to allow people to have lifelike interactive experiences through their avatars.”


I own a VR headset. I mostly use it for gaming. Virtual reality offers me that escape from the real world — again, picture my peaceful and inspiring work-environment goals. This total immersion isn’t the only feature of the metaverse, though, and it’s not entirely practical for going about your everyday life. Enter augmented reality (AR).

Leslie Shannon likes the AR side of things. She authored “Interconnected Realities: How the Metaverse Will Transform Our Relationship With Technology Forever.” For her, the metaverse is a partly or fully digital experience that brings together people, places and information in real time in a way that transcends that which is possible in the physical world alone. She wants the metaverse to solve our problems — to be useful, not just entertaining.

“The problem is that smartphones and computers have done too well at solving the problem of delivering information and entertainment to us, exactly when and where we want it. To get this spectacular convenience, we’re prepared to pay a surprisingly high cost in terms of our connection to the people, places and things physically around us, and it’s a cost that we’re paying quite thoughtlessly today. You can probably name an incident in your own life within just the past week in which looking at a screen, rather than being present in your immediate surroundings, created a situation that caught you out socially, or made you neglect someone, or was even potentially dangerous. We’re all complicit in this one.”

How could an immersive digital world be the answer, Shannon asks. It’s not. But: “If we start thinking about a spectrum of experience, in which the far-left-hand side is 100 percent physical experiences, and the far-right-hand side is 100 percent digital experiences, then there also exists a middle point that is 50 percent physical and 50 percent digital, and sliding proportions of digital/physical mixes on either side of that middle point.”

Shannon says it’s the digital/physical mixes that deserve our attention. She calls this “interconnected realities.”

IMAGE: Abjad Design
Making the ‘metaversity’

By: Suzanne Condie Lambert

Khalifa University thinks the metaverse will be vital to the way students learn in the future. That’s why it teamed with Microsoft UAE and Hevolus Innovation for the 2023 Metaversity Hackathon, inviting student teams to create metaverse classrooms to remove physical barriers, making immersive, engaging and collaborative experiences inclusive and accessible. “One day we will have a university that is fully in the metaverse,” says Dr. Arif Sultan Al Hammadi, Khalifa’s executive vice president and KUST Review’s editor-in-chief. “Students will get the best education in the world wherever they are.”

KU wants to be in the vanguard, and the hackathon, he adds, is a first step to getting there. Institutions of higher learning would benefit as well, requiring fewer physical resources. Al Hammadi points to the example of medical school cadavers, which are expensive and may pose ethical concerns.

Schools are already using interactive 2D screens to reduce the number of cadavers required to teach anatomy, he says. A 3D metaverse could be the next leap forward. There are downsides, Al Hammadi says. Cheating is harder to detect. The physical experience of labs and experiments can’t yet be fully replicated. And distance learning doesn’t offer the same social life as on-campus classes.

But Al Hammadi says that as models improve, students will eventually be able to get much of the same experience in the metaverse. Hadi Otrok, a KU professor of electrical engineering and computer science, sees promise especially in using avatars to free instructors from small tasks, like running tutorials. “The challenge will be,” he says, “how to get the students … engaged with you instead of on the phone.”

It will take courage to take these ideas and create a fully interactive online experience, Al Hammadi says, suggesting that a potential “metaversity” could start with just one degree to prove the concept. And Khalifa University, he says, wants to be on the front end of imagining that future.

“This concept of the metaverse is a world in which we can have the compelling, fascinating, relevant content that we currently access on screens, but integrated visually into our physical world in a way that enhances our lives, rather than removing us from them. This concept of the metaverse imagines the digital and physical aspects being incorporated with each other on a constantly sliding scale, so that sometimes we are fully immersed in a digital world, when that serves the purpose of the moment, but it is also possible to spend significant time fully immersed only in the physical world.

“This metaverse of interconnected realities will be a place where we combine digital information or entertainment from the world of the internet with our physical surroundings so that we can be more efficient, more informed, more delighted and more aware than we are today. A simple example of this enhanced future might be a sensor in my oven that connects with my AR glasses and, when the oven is on, displays its current temperature in a visual digital overlay when my gaze lingers on my oven for more than one or two seconds — useful when I’m on the other side of the kitchen.”

Are we talking about a heads-up display (HUD) fixed permanently in my vision? I’d quite like that. I wear glasses anyway. It would be so helpful if people in real life had little tags above their heads to remind me of their names — facial recognition in VR land. Or a mini-map in the corner of my field of view so I’d never get lost again, video game-style.

After all, HUDs aren’t new. In aviation, they date to the end of the Second World War when rudimentary systems were installed in a few military aircraft.

The modern-day fighter pilot helmet boasts an impressive HUD, and Iron Man had one too. Granted, Iron Man belongs to the realm of fiction, but plenty of technology emerged from the minds of creators and novelists — including the term “metaverse.”

“The term ‘metaverse’ was coined by author Neal Stephenson in his 1992 novel ‘Snow Crash,’” says Matthew Ball, author of “The Metaverse: And How It Will Revolutionize Everything.” “For all its influence, Stephenson’s book provided no specific definition of the metaverse, but what he described was a persistent virtual world that reached, interacted with, and affected nearly every part of human existence.”

There’s that persistence again.

The “affecting nearly every part of human existence” thing I’m not so keen on.

EVERYONE EVERYWHERE ALL AT ONCE?

“The metaverse is a vast, immersive virtual world simultaneously accessible by millions of people through highly customizable avatars and powerful experience creation tools integrated with the offline world through its virtual economy and external technology,” Wagner James Au says in his book “Making a Metaverse That Matters: From Snow Crash and Second Life to a Virtual World Worth Fighting For.” He also says, however, that the metaverse is not for everyone:

“Chances are you’ve seen more than several tech evangelists across various media outlets insist that we’ll all soon be in the metaverse. I can tell you from painful — but also amusing — experience that this is unlikely ever to be the case. And, no, you probably won’t wear a VR headset on a regular basis either.

“That said, it’s also safe to say at least one in four people with Internet connectivity will be part of the metaverse on some level. At a very conservative estimate, over half a billion people worldwide already use one or more variations of a metaverse platform now, from Minecraft and Roblox to Fortnite, VRChat and Second Life. That’s about 1 in 10 of the 5 billion people across the planet who use the internet.”

The majority of Au’s examples are games. Gaming companies are the pioneers in the metaverse space, well known as early adopters and prototype metaverse builders. Minecraft and Fortnite offer virtual worlds where players meet as avatars to play games and chat. They offer in-game payment systems and in-game assets that travel with players across platforms: from PC to console to mobile. They are also social spaces where gamers forge online relationships and communities.

IMAGE: Abjad Design

This gaming-world innovation aligns closely with many of the working definitions I found of the metaverse concept. Indeed, Ian Khan, author of “Metaverse for Dummies,” says the metaverse refers to virtual reality-based online worlds and notes that many of these worlds are gaming environments or online games. “Others function more as online virtual places where you can do other activities such as meet people, learn new things or simply hang out. And the types of virtual worlds you can find in the metaverse continue to expand and are likely to continue to evolve.”

Many of the experts I found, however, wouldn’t say we have a metaverse yet.

Dixon says the metaverse does not yet exist, but “its ultimate scope is constrained only by the limits of human imagination.”

Aakansha Saxena, assistant professor at the School of Information Technology, AI and Cyber Security, Rashtriya Raksha University, calls the metaverse a “concept”: “It can be understood as an infinite universe where communities of people can collaborate and enjoy the mechanisms of augmented reality, virtual reality, extended reality, online life and much more.”

That sounds like many of these games to me.

Khaled Salah, professor of electrical engineering and computer science at Khalifa University, throws a spanner in the works with his definition, saying: “A metaverse is an immersive and 3D virtual world in which people can interact through avatars to carry out their daily interactions, unlocking the potential to communicate, transact and experience new opportunities on a global scale.”

I’m struck by his choice of article: “a metaverse,” not “the metaverse.” Of all the people I asked, books I read and research articles I consulted, Salah was the only one to raise the question of multiple metaverses. Does each gaming platform or each individual game have its own metaverse?

And if each platform has its own, how can we move seamlessly between them all?

Maybe Mark Zuckerberg, CEO of Meta, has the answer. He said on the Lex Fridman Podcast that the metaverse is not a construct of connected virtual places:

“Instead, the metaverse is the point of time when we do and spend large portions of our everyday digital work and leisure in immersive 3D environments with VR and AR glasses.”

Meta, of course, used to be Facebook, and the company changed its name in 2021 to highlight its new direction. The company has since announced a U.S.$2.5 million investment supporting independent academic research across Europe into metaverse technologies because “since no one company will own and operate the metaverse, this will require collaboration and cooperation.”

QuHarrison Terry, author of “The Metaverse Handbook,” sums it up:

“Let me clear the air and first tell you what the metaverse is not. The metaverse is not a single technology. It’s not just a place we’ll visit in VR. It’s not something that can be created and claimed by the next Bezos or Gates. In fact, the metaverse is about as boundless and unownable as the internet, if not more so. Sure, there are entities that have contributed more to the internet than others. Of course, there are innovations that steered the course of the internet and influenced the experience of the web. But we didn’t wake up one day with the internet we see now. It was an ever-evolving thing.”

WHERE ARE WE GOING WITH THIS?

“The metaverse in the early 2020s is the equivalent of the mid-1990s in the development of the internet: Many people are talking about it, a few people are already building it, but no one can really define what it is, or what it will be able to do for us, or even if it will be relevant to anyone at all once it’s here,” Shannon says.

Khan, author of “Metaverse for Dummies,” agrees that in terms of development, the metaverse today is where the internet was in the 1990s:

“The early internet was shaped by new ideas, technologies and ways of doing things. With the right investments, adoption and usage, the internet grew into the internet we know today. Similarly, the metaverse today provides an interesting place for many activities, but many of them are still in the early days of development. The investment and attention put into building the metaverse over the next five to ten years will determine what the metaverse ultimately becomes and the value it creates.”


“The metaverse becomes more real every time we replace a physical habit with a digital equivalent.”

QuHarrison Terry, author of “The Metaverse Handbook: Innovating for the Internet’s Next Tectonic Shift”

Per University of Naples’ Trunfio:

“The metaverse, like many innovations, is shrouded in mysticism and skepticism. If many believe it will be revolutionary and fully transform how people work, shop, socialize and play, others are skeptical, and see it as a fad. However, whether or not we think of the metaverse as a technological revolution, it is undeniable that the massive diffusion of this technology will impact on nearly all aspects of life and business in the next decade, allowing interaction in virtual and augmented spaces and a blend of both.”

Whether you’d say the metaverse is here already or well on its way, it’s clearly the next big disruptor: the new place to be for all aspects of life.

After everything I’ve read and all the people I’ve spoken to, I find it funny that the definition of the metaverse that resonates most with me is far more abstract than the very scientific approaches I’d usually turn to.

Shaan Puri, tech entrepreneur, posted a tweet in 2021 that sums it all up pretty nicely:

“The metaverse is the moment in time where our digital life is worth more to us than our physical life.”

Or as Terry puts it: “The metaverse is not just a place we’ll visit in VR. It is not a destination. The metaverse is a movement — a movement toward the digital-first livelihood we’ve slowly been adopting year over year, app by app. The metaverse becomes more real every time we replace a physical habit with a digital equivalent. We, the digital citizens of the internet, are manifesting the metaverse by trading time in the physical world for time online.”

I’m OK with this.