THROUGH THEIR EYES

Robots are becoming increasingly involved in our everyday lives, assisting with everything from manufacturing and logistics to health care and housework. Yet they still face significant hurdles. Here are two ways teams at Khalifa University are improving the technology.

ENHANCED PERCEPTION

Accurately recognizing and dividing up objects in a robot’s environment is a task made challenging by occlusions (objects blocking the view), complex shapes and ever-changing backgrounds. This stands in the way of robots fully grasping the world around them. The technical term for this daunting task is “panoptic segmentation” — dividing an image into foreground objects and background regions simultaneously. Improving a robot’s perception of its environment would enable robots to handle complex tasks more efficiently.
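To picture what panoptic segmentation produces, here is a minimal sketch, with made-up labels and a toy scene rather than the KU system’s output: every pixel receives a semantic class, and foreground objects additionally receive an instance ID so the robot knows which object is which.

```python
import numpy as np

# Toy panoptic label map: each pixel gets a (class, instance) pair.
# Background "stuff" (the table) has no instance ID; foreground
# "things" (two cups) are separated into distinct instances.
H, W = 4, 6
semantic = np.zeros((H, W), dtype=int)   # 0 = table (background)
instance = np.zeros((H, W), dtype=int)   # 0 = not an object instance

semantic[1:3, 1:3] = 1; instance[1:3, 1:3] = 1   # cup, instance 1
semantic[1:3, 4:6] = 1; instance[1:3, 4:6] = 2   # cup, instance 2

# One unique label per (class, instance) pair, so a grasp planner can
# target *this* cup rather than just "cup-colored pixels."
panoptic = semantic * 1000 + instance
print(panoptic)
```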


However, this problem isn’t easy to solve. Cluttered scenes, object variability, objects that block vision, motion blur and the low temporal resolution of traditional cameras all make it a tough nut to crack. Added to this, high latency — or delays — in processing sensor data can slow response times and reduce task accuracy.

Recent developments in object segmentation using graph neural networks have their own limitations: They struggle with robotics’ added requirement that both panoptic segmentation and grasp planning be done quickly and efficiently. More sophisticated algorithms and techniques that can grapple with the real world’s unpredictability are needed.


Yahya Zweiri, director of the KU Advanced Research and Innovation Center, and his team developed a method to overcome these challenges using a graph mixer neural network (GMNN). Specifically designed for event-based panoptic segmentation, a GMNN preserves the asynchronous nature of event streams, making use of spatiotemporal correlations to make sense of the scene. The KU researchers developed their solution with researchers from London’s Kingston University.
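To give a flavor of the event-graph idea — in a deliberately simplified sketch, not the published GMNN architecture — an event camera outputs a sparse stream of (x, y, timestamp, polarity) tuples, and a graph can be formed by connecting events that are close in both space and time, letting a network exploit exactly those spatiotemporal correlations without collapsing the stream into dense frames.

```python
import numpy as np

# Toy event stream: (x, y, t, polarity). Real event cameras emit
# millions of such tuples per second, asynchronously per pixel.
events = np.array([
    [10, 12, 0.001, +1],
    [11, 12, 0.002, +1],
    [10, 13, 0.003, -1],
    [40, 5,  0.004, +1],   # far away in space, so no edge to the cluster above
])

# Connect events that are near in space AND time (illustrative thresholds).
R_SPACE, R_TIME = 3.0, 0.005
edges = []
for i in range(len(events)):
    for j in range(i + 1, len(events)):
        d_xy = np.hypot(*(events[i, :2] - events[j, :2]))
        d_t = abs(events[i, 2] - events[j, 2])
        if d_xy <= R_SPACE and d_t <= R_TIME:
            edges.append((i, j))

print(edges)  # [(0, 1), (0, 2), (1, 2)]; the isolated event stays unconnected
```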

Their results were showcased at the 2023 IEEE Conference on Computer Vision and Pattern Recognition, one of the most prestigious conferences in the field of computer vision. They were awarded best paper by a committee that included experts from Meta, Intel and leading U.S. universities.

“GMNN has proven its worth, achieving top performance on the ESD (event-based segmentation dataset), a collection of robotic grasping scenes captured with an event camera positioned next to a robotic arm’s gripper,” Zweiri says. “This data contained a wide range of conditions: variations in clutter size, arm speed, motion direction, distance between the object and camera, and lighting conditions. GMNN not only achieves superior results in terms of its mean intersection over union (a key metric for segmentation accuracy) and pixel accuracy, but it also marks significant strides in computational efficiency compared with existing methods.”
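For readers unfamiliar with the metric Zweiri cites: intersection over union compares a predicted object mask with the ground-truth mask, and mean IoU averages the score across classes. A tiny worked example with made-up masks:

```python
import numpy as np

# Ground truth and predicted binary masks for one object (1 = object pixel).
gt   = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [0, 0, 0]])
pred = np.array([[1, 1, 1],
                 [0, 1, 0],
                 [0, 0, 0]])

intersection = np.logical_and(gt, pred).sum()  # pixels both masks agree on: 3
union        = np.logical_or(gt, pred).sum()   # pixels in either mask: 5
print(intersection / union)                    # IoU = 0.6

# Mean IoU averages this score across all classes in the scene;
# a score of 1.0 would mean the predicted segmentation is perfect.
```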

This model lays the groundwork for a future where robots can perceive and interact with their environment as efficiently as possible, opening up a world of potential applications across industries.

DRILLING INTO GREATER PRECISION

Robotic drilling systems play a crucial role in such industries as manufacturing, construction and resource extraction. Achieving precise positioning of these drilling systems is essential to ensure accuracy, efficiency and safety in drilling operations. To address this challenge, researchers have been exploring advanced control techniques that can improve the positioning accuracy of robotic drilling systems.

One such technique that has shown promising results is neuromorphic vision-based control. By leveraging the principles of neuromorphic engineering and incorporating vision-based sensing capabilities, this approach offers a novel solution for enhancing the precision of robotic drilling.
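In broad strokes, vision-based control of a drill is a feedback loop: the camera estimates the offset between the drill tip and the target hole, and the controller commands a correction until the error falls below tolerance. The sketch below is a hypothetical, bare-bones proportional version of such a loop, not the team’s published controller; neuromorphic cameras matter here because their low latency lets a loop like this run at very high rates.

```python
def visual_servo_step(target_xy, tip_xy, gain=0.5):
    """One proportional correction: move the tip a fraction of the
    measured error toward the target hole. Illustrative only."""
    ex = target_xy[0] - tip_xy[0]
    ey = target_xy[1] - tip_xy[1]
    return (tip_xy[0] + gain * ex, tip_xy[1] + gain * ey)

# Simulated convergence to a hole at (10.0, 5.0) mm, 0.1 mm tolerance.
# A neuromorphic camera would refresh the error estimate between steps
# with microsecond-scale latency.
tip = (0.0, 0.0)
while abs(10.0 - tip[0]) > 0.1 or abs(5.0 - tip[1]) > 0.1:
    tip = visual_servo_step((10.0, 5.0), tip)
print(tip)  # lands within 0.1 mm of the target: sub-millimeter positioning
```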

Zweiri and his team, along with Dewald Swart at Strata Manufacturing, developed a neuromorphic visual controller approach for precise robotic machining.

“The automation of cyber-physical manufacturing processes is a critical aspect of the fourth industrial revolution (4IR),” says Abdulla Ayyad, a researcher on the team. “Between 2008 and 2018, the number of industrial robots shipped annually more than tripled, and by 2024, more than 500,000 industrial robots are expected to ship each year.

“The UAE specifically is aiming to become a global hub in 4IR technology and our work is aligned directly with this vision to support solutions for increased efficiency, productivity and safety.”


“The manufacturing industry is currently witnessing a paradigm shift with the unprecedented adoption of industrial robots, and machine vision is a key perception technology that enables these robots to perform precise operations in unstructured environments,” Zweiri says.

“Neuromorphic vision is a recent technology with the potential to address the challenges of conventional vision with its high temporal resolution, low latency and wide dynamic range. For the first time, we propose a novel neuromorphic vision-based controller for robotic machining applications to enable faster and more reliable operation, and present a complete robotic system capable of performing drilling tasks with sub-millimeter accuracy.”

Automating certain manufacturing processes means greater performance, productivity, efficacy and safety, and drilling is one of the processes most ripe for automation. It is widespread, especially in the automotive and aerospace industries, where high-precision drilling is essential because drilling quality correlates with the performance and fatigue life of the end products.

The AI will see you now

Though humans can offer a caring bedside manner, there are some skills machines excel at. And they may benefit you more than you know.

From accelerating data processing to developing life-saving drugs to analyzing imaging, AIs possess skills human brains can’t compete with. But that doesn’t mean human doctors will be out of a job. They’ll just work with machines to help us get better faster.


IT’S A CLINICAL THING

Drug development is an exhausting process of trial and error, with researchers spending years developing drugs and analyzing data. The potential for failure looms at every step. And then there’s the seemingly endless process of testing and acquiring approval to get a drug on shelves. The process can take up to 15 years.

If there’s anything COVID-19 has taught us, it’s that drug development needs to be safe and effective, but it also needs to be speedy. New developments in AI are poised to reduce the waiting game.

A 2023 study led by Rizwan Qureshi from Hamad Bin Khalifa University in Qatar suggests AI can assist drug development at every stage. With an average price tag of U.S.$2.5 billion to bring a drug to market, this is good news. After all, time is money.


How much money? AI could save U.S.$70 billion in the drug-discovery process by 2028, according to Bekryl Market Analysts.

And investment bank Morgan Stanley says the pharmaceutical industry might spend U.S.$50 billion a year on AI within 10 years.

One way AI is improving the process is in drug design.

Antibody designs at the beginning of the drug-development phase are normally built on existing designs or data. AI, however, can design antibodies from scratch. This is called zero-shot design.


Absci in 2023 became the first company to develop a zero-shot generative AI model with its de novo (new) antibody designs via computer simulation. Its antibodies are crucial segments of a drug used to treat breast cancer.

Although the technology is new, Absci says it could cut the time it takes to get a drug to market by up to two years.

The company has introduced a pipeline of projects on its website after opening an innovation center in Switzerland focused on dermatology, inflammatory bowel disease and immuno-oncology.

“Our wet lab can experimentally validate the candidates that work right out of the computer – without the slow and costly step of lead optimization. This potentially reduces the time it takes to get new drug leads into the clinic, while unlocking treatments for traditionally ‘undruggable’ diseases and improved therapeutic possibilities for many others,” according to Absci’s website.

Absci is only one of many drug developers using AI to move research along. As of the last quarter of 2022, a survey by global management-consulting firm McKinsey and Co. concluded there are close to 270 companies working in the drug-discovery market, some partnering with large biopharma companies. The survey also concluded that using AI accelerates the generation of protein structures by 100 times and image screening and analytics by 10 times.

But what about the diseases these drugs are designed to treat?

DIAGNOSTICS AND EARLY DETECTION

It can take medical specialists decades to acquire diagnostic skills. Typically, these skills come from experience. It’s the same for AI – it has to learn from somewhere. Using computerized records, machine learning can find patterns in the data and use them to interpret new cases, as long as there are lots of samples and they’re neatly digitized and organized.

The main difference between the well-trained human eye and the AI, however, is that the AI can interpret data in seconds.
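As a concrete, entirely synthetic illustration of that learning step: given digitized records with a few numeric features and known outcomes, a model fits the pattern once, then scores a new record in milliseconds. The data and feature names below are invented for the sketch; this is not a medical tool.

```python
# Toy diagnostic classifier on synthetic records (requires scikit-learn).
from sklearn.linear_model import LogisticRegression

# Each record: [age, blood-marker level]; label 1 = condition present.
records = [[34, 1.1], [51, 2.3], [47, 2.0], [29, 0.9], [62, 2.8], [41, 1.2]]
labels = [0, 1, 1, 0, 1, 0]

model = LogisticRegression().fit(records, labels)

# Scoring a new record takes milliseconds: the speed edge over
# a human reviewer described above.
print(model.predict_proba([[55, 2.4]])[0][1])  # probability of the condition
```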

AI algorithms trained to examine X-ray, CT and MRI scans can, in virtually no time, find, identify and classify tumors and offer information about potential growth rate and risk of metastasis.

Furthermore, AI could produce a health-risk score based on lifestyle, health and predisposition aligned with genetic data, using blood work and imaging to warn before a disease becomes medically significant.

One such risk, infection, can lead to life-threatening sepsis, considered the second-leading cause of death globally.

Although anyone can get sepsis, it has a genetic component. And researchers hope AI can help to identify the missing markers in the research.

Asrar Rashid, acting chairman of pediatric services at NMC Royal Hospital and head of the Department of Pediatric Critical Care in Abu Dhabi, has spent many years treating babies and children with sepsis, often reaching them too late to help. So he’s excited about the clinical benefits of AI.



Rashid’s Ph.D. focuses on finding the missing piece of the genetic puzzle. He published a 2009 paper that concluded it was difficult to find a pattern and thus link genetics to predisposition to sepsis.

Fast-forward to 2023: With the help of AI, he’s made headway finding a pattern in the DNA chaos.

Because AI can process thousands of pages of historical data, find patterns within it and enable techniques for looking at genes in a novel way, Rashid has gained insight into the underlying mechanisms of complex biological systems.

“Our work at NMC Royal Khalifa, for the first time, moves medical practice from one-point to potentially two-point biomarking (triangulation),” Rashid tells KUST Review.

This means there is only a third point left to uncover to complete the triangle. “A novel contribution is the fact that we can use changes at the level of the genes to give clues about the dynamic landscape of cellular processes. If we can effect change at the level of the gene, this might be more useful to the patient, for example, by minimizing sepsis damage to the organs,” he adds.

AI AND PATIENT MANAGEMENT

The pandemic taught us that when a crisis arises on a global scale, streamlining processes and simpler tasks frees health-care workers to tend to more critical situations. And hospitals and clinics all over the world are adopting AI to meet these challenges.

For example: medical concierge service Forward’s AI diagnostics.

At Forward’s tech-driven clinics, subscribers check themselves in on an iPad and get a 3D body scan. Algorithms interpret the data before the patient reaches a physician.

A touch-screen panel can record your discussion with your physician, eliminating the need to take notes.

A high-tech tongue depressor can check blood pressure, temperature and heart health in under a minute, with the information immediately added to your file in the cloud.

Tools for mental health

AI is also making its mark in the world of mental health.
“Using ethically trained AI models, we can empower clinicians to observe invisible biomarkers that signpost different mental health conditions, in the same way that a blood test or ECG might be used to detect and monitor physical health conditions,” says Gabrielle Powell, COO and co-founder at Thymia.

Thymia provides gaming technology that collects voice, behavior and video data to help diagnose mental illnesses such as anxiety and depression. The AI processes such data as reaction times, error rates and how the keys on the gaming controller are used. Thymia is rolling out its tools across several global mental-health settings. It is also developing a tool to help diagnose cognitive illnesses like Parkinson’s, Alzheimer’s and ADHD, Powell tells KUST Review.

“We’ve been using the same tools to diagnose and treat mental health for decades. AI has the potential to radically improve the speed and accuracy of mental-health diagnoses, making it far easier for patients to get the right help,” Powell says. “There’s so much room for improvement in mental health-care systems globally – and the right tech tools, when deployed responsibly and ethically, have immense potential to improve how care is delivered.”

But the team at Thymia is clear the tech is meant to assist, not replace, the mental-health professional: “The way we work with clinicians is as a support tool; we support them to decide if intervention is required and to identify the most appropriate intervention,” Powell says.

All of your health data is accessible via Forward’s app, which is intended to work in a preventive rather than a reactive manner: syncing with wearable devices that track behaviors, offering DNA analysis for predispositions and allowing health-care professionals to monitor conditions around the clock.

Forward’s website refers to these clinics as a “stepping stone.”

If Forward co-founder and CEO Adrian Aoun has his way, medicine will be viewed as a product rather than a service, all thanks to another service, his “Doc in a box.”

Aoun’s CarePod looks like a modern photo booth. Step inside the 2.5-square-meter box, choose from a menu of requests, get scanned, and out pops a diagnosis and a plan of action or prescription. Though skepticism abounds, Forward recently raised U.S.$100 million to bring 25 CarePods to U.S. malls.

“You walk up to it and unlock it with your phone. You choose something like the body scan app and it actually spins you around in a circle and takes a whole bunch of readings, then shows you the results and gives you any treatment you need, a prescription or a plan,” Aoun tells Fierce Healthcare.

A cloud AI filing system is a data-rich industry’s dream. With medical records at their fingertips, and devices that provide instant medical test results, doctors can make timely decisions based on history and AI can determine the patient’s likelihood of response to different drugs.

PATIENT HOMECARE

But if you’d rather enjoy the comfort of your living room, Nader Abu Yaghi, director of NMC ProVita Homecare in Abu Dhabi, says his aim is to revolutionize homecare with AI.

AI within the sector is in its infancy, Yaghi says, but there are systems aiming for release in 2024 that could make homecare easier, safer and less costly.

NMC ProVita Homecare is developing an AI-powered system to remotely monitor patients’ vitals and activity levels and allow for early intervention. It could monitor, for example, diabetic patients’ blood sugar levels.

AI systems are also being developed to manage chronic conditions. This might mean personalized diets, exercise programs, medication and treatment plans.

In testing, these proactive monitoring and personal-care systems reduced hospital readmission rates by 15 percent and the cost of care for homecare patients by 6 percent.

“AI is a powerful ally for our telemedicine efforts,” Yaghi says. “It (telemedicine) is essential for our hospital’s home-based patient-care strategy, as it allows us to reach and serve patients in their homes directly. It is especially important for maintaining continuity of care and providing access to health-care services for patients who cannot travel to the hospital.

“AI also helps manage patient schedules and follow-ups by optimizing appointment times and ensuring regular monitoring. This is particularly important in a homecare setting, where consistent engagement and timely intervention can have a significant impact on patient outcomes,” he adds.

IT’S ALL ABOUT THE DATA

Data-rich industries like health care are primed for AI development, but the data is just the first step, says Dirk Richter, director of health innovation at Abu Dhabi’s Department of Health.

The Department of Health has spent the past three years creating a central health-information exchange system called Malaffi — Arabic for “my file” — containing essential patient records for the emirate’s medical system. No matter which medical facility patients enter across the emirate, their data will be accessible.

This means a full history, emergency care, routine check-ups, test results and scans, without patients having to recount their medical history again and again.

Included is an app for patients to see their own reports and everything that’s happening, Richter says. This offers peace of mind to patients, convenience to both doctor and patient, and savings from avoiding duplication of costly scans or tests. It also includes such features as predictive patient-risk profiles.


With the influx of AI devices, it’s important to choose the right tools. Under a newly issued policy, the Department of Health has established a health-technology assessment team that evaluates AI medical tools against global data for safety and effectiveness, treating each new tool as it would a new drug.

Medical establishments will be obliged to use the approved tools, and insurance coverage is factored in. After all, there’s no point in adopting these tools if they go unused because people can’t or won’t pay for them. “Otherwise, it’s like they don’t exist,” Richter says.


Already, the Department of Health is using advanced imaging algorithms, assisting physicians who may have spent a full day looking at hundreds of scans. Algorithms are also helping ophthalmologists to determine at what stage of diabetes a patient might be by reading retinal scans. And AI assistive technology is analyzing images taken during colonoscopies to spot abnormalities or tumors that the person performing the procedure might not see.

All AI diagnoses are dependent on the data AI has learned from, so big data is essential.

Richter says several research and development centers are in the works and partnerships have been formed to further the Department of Health’s AI innovation. Universities and research establishments, including Khalifa University, continuously apply for grants.

Though Richter’s role is AI- and technology-focused, he believes these tools will bring patients and physicians together, creating more time for communication, which can also be considered important data in diagnostics.

“So a radiologist, instead of spending two hours looking at chest X-rays every day could spend their time talking to the patient who really has important findings in their X-ray,” he tells KUST Review.

THE POWERHOUSE COUPLE

But just as there are things human brains can’t do, there are things humans can do that AI can’t — like express true human emotion.

Not to mention, doctor-patient relationships foster trust. But that trust is based on positive interactions over time.

According to 2022 data from patient-engagement platform PatientPoint, the typical waiting time once a patient enters the office for a doctor’s appointment is 26.2 minutes. This waiting time is mainly due to the volume of patients and administrative tasks that might be done instead by AI.

Given that doctors are typically allotted 15 minutes for patient interaction — comparable to an assembly line — letting AI handle triage and admin work could allow a doctor to become a doctor again with time to listen, hear pertinent information, empathize and strategize with a patient.



A 2023 University of Arizona study concludes that 50 percent of people will trust an AI medical diagnosis only if it’s backed up by a human doctor.

Skeptics say AI will simply be a way to push more patients through the door at an even faster rate, money being the driver. And for good reason — there are 3,147 medical start-ups in the United States alone. The information is clear, however: AI has great potential, but it’s up to us how we use it.

“In short, both mind and machine need to work in synergy,” says Hamad Bin Khalifa University’s Qureshi.

The science behind the global chocolate obsession

Dark, milk, white, flavored with orange, mint, strawberry or the growing favorite, salted caramel: approximately 7.5 million tons of chocolate are consumed annually. But there are reasons chocolate, celebrated around the world on Sept. 13, International Chocolate Day, is so beloved, and it’s not just that it tastes so good.

How did it get so delicious? That is a tale of innovation.

The Swiss have been perfecting chocolate since the early 19th century. François-Louis Cailler turned the granular texture of the cocoa bean into a smooth chocolate bar. Then world-renowned chocolatier Rodolphe Lindt perfected that recipe by adding cocoa butter with a machine he invented called a conche.

But why does the vast majority of the world love it so much?


It’s a chemical thing. The ingredients and chemicals in chocolate have positive effects on our brains and bodies. It’s not necessarily the individual amounts, but the chemicals combined that result in the desired effect.

First on the chocolate inventory list is phenylethylamine, the catalyst in the release of endorphins and an increase in serotonin and dopamine — offering feelings of happiness and contentment. Phenylethylamine is also the chemical released by the brain when you are in love. So it’s with good reason we give in to the impulse-buy stack of chocolate at the grocery checkout.

Additionally, the stimulant theobromine offers chocolate eaters a bit of a boost. This ingredient causes an energy lift similar to caffeine’s, but the effects of theobromine last longer. Chocolate also contains the real caffeine deal, with nearly 25 percent of the caffeine in an average 8-ounce cup of coffee.

So far, we’re happy, content and feeling wide awake and energetic. What’s next?


L-tryptophan is an amino acid used to make essential proteins; it is not produced naturally by the body, so we get it from the foods we consume. And yes, chocolate has this too. Combined with the sugar carbohydrates in chocolate, this magic chemical, a building block of serotonin, offers us a little chill-out vibe.

And one of the big winners is the polyphenol family — valuable antioxidants found in various foods that protect the brain. Polyphenols are often used as supplements for those struggling with psychiatric or cognitive challenges, as they have fewer harmful side effects than medications.

So now we know why chocolate makes us feel good. But there are also health benefits to a moderate relationship with chocolate — most commonly dark chocolate.

According to Johns Hopkins Medicine, some chocolate has a positive effect on heart health. So, if you’re reaching for a fix and hoping to feel good about your choice, come over to the dark side.

It’s those magic antioxidants again.

The most impactful is a flavanol called epicatechin — a phytochemical compound found in dark chocolate that offers a number of health benefits.

Epicatechin lowers the risk of heart disease and stroke by reducing blood pressure and increasing healthy blood flow to the heart. It offers immune-system stability, preventing the overactivity linked to some diseases. It also helps the body use insulin properly, combating diabetes.

Those are the health benefits of the things epicatechin reduces in our bodies, but what are the gains?

Well, it offers some welcome gifts to our brains. These include a memory boost, quicker response times and better visual acuity. In addition, epicatechin reduces the amount of oxygen required by athletes by increasing the volume of nitric oxide in the bloodstream, so they can perform longer.


There is a long list of reasons dark chocolate should be a welcome addition to our diets, but it is also important to maintain a healthy and moderate relationship with the delicious treat. Sugar content and high carbohydrates are things to watch for.

Too much can counter all of the good it can do for us in moderation. According to Health Digest, “A diet high in sugar-rich chocolate can be a factor in the development of insulin resistance, which can lead to Type 2 diabetes.”

And Type 2 diabetes can lead to blindness, heart disease, stroke, nerve damage and kidney disease.

So, as you reach for your chocolatey favorite this International Chocolate Day, remember: Chocolate is your friend, but consider it a fair-weather one.

Pop culture clash: Ask the experts

Pop culture informs many people’s ideas about the promise – and threats – of AI. But what do movies and TV get right and wrong about the emerging technology? We asked two human experts and ChatGPT. This is what they said.


ENTERTAINMENT GETS ARTIFICIAL NARROW INTELLIGENCE BEST
— Lee Barron


One consequence of the contemporary impact of generative AI and ChatGPT (aside from its reach across a range of productive fields) is that its apparent conversational ability to “understand” users has given rise to a new wave of speculation concerning the supposed imminence of artificial general intelligence.

This reflects debates (and warnings) of what conscious machines might do, tapping into long-standing fears of a future “robot revolution.”

In popular culture, this perception has been persistent in many representations of artificial intelligence: from Colossus and Guardian, misguidedly given control over the U.S. and Soviet nuclear defense systems in the 1970 film “Colossus: The Forbin Project,” to the attacks on humanity by the “Terminator” series’ Skynet and Legion, to directly murderous machines like HAL and M3gan.

Alternatively, AI is shown initiating world-changing events and manipulating human actions (for good or ill), as illustrated by The Entity in “Mission: Impossible – Dead Reckoning Part One” and the AI-child “weapon” in “The Creator.” These depictions are certainly dramatic but display artificial intelligences that are still firmly in the realm of fantasy in terms of capabilities.

Lee Barron

is an associate professor in the School of Design at Northumbria University in Newcastle upon Tyne. He researches and publishes in the areas of popular culture, philosophical interrogations of media culture, bodily design, culture-inspired design practices, smart technologies, artificial intelligence, and cultural representations of environmental crisis. His latest book is “AI and Popular Culture” (2023).

Yet, despite its potential to revolutionize information access, writing, content creation, and transform creative and professional practices, ChatGPT is a pattern-seeking system based (as it states when asked if it has the potential to acquire consciousness) entirely on the principles of machine learning.

In this way, while depictions of AGI present dramatic examples of AI’s potential, it is representations of reality-based artificial narrow intelligence that identify more realistic AI developments, even when cloaked in sci-fi trappings.

For example, while Steven Gomez’s film “Kill Command” initially suggests another tale of sentient robots in revolt, it is an effective exploration of the principles and processes of unsupervised machine learning in action. This is because the machines use humans as a “training dataset” to improve their military performance and capabilities: the key process that enabled artificial intelligence to develop rapidly in the 21st century, and a key component of ChatGPT, which was created through access to training data on the web.

Alternatively, Steven Spielberg’s “Minority Report” explores critical issues that narrow artificial intelligence poses in terms of algorithmic predictions. Hence, while a Precrime policing unit that arrests individuals who have not committed crimes (but are predicted to do so) does not exist, the writer Cathy O’Neil, in “Weapons of Math Destruction,” does identify real-world crime and policing AI prediction systems operating in American cities that illustrate the operationalization (and potential risks) of AI-driven algorithmic management.

Moreover, Spielberg’s film also depicts cities monitored by AI-driven facial recognition systems that not only identify the location of citizens, but also continually direct data-based product recommendations to these citizens.

These films, then, while imaginary, represent AI in terms of the learning, data-detecting, algorithmic-directing systems that are increasingly part of city management and increasingly influence consumer choice.

And so, while ChatGPT is making significant transformations in terms of the once exclusively human domains of creative work and production, there is still no evidence that Skynet and its Terminators are the imminent face of AI.

TROPES AND EXAGGERATIONS DO HARM
— Aliah Yacoub


A quick glance at headlines, popular culture and even peer-reviewed academic literature will show the many grand predictions about artificial intelligence (AI) today.

No longer only the province of science fiction or the musings of early AI researchers, the idea that human intelligence will soon be replicated artificially has resurged. The serious reflection on this is credited to what is known as “The Singularity” theory: the inevitability of a future in which AI will not only exceed human intelligence but also, immediately thereafter, make itself rapidly smarter, reaching a superhuman level of intelligence.

“The Singularity” permeates much of popular culture. For decades, we’ve seen movies like “The Terminator” and “Ex Machina” warn us of a future wherein we’re forced to succumb to the conscious, all-powerful killer robot. But the idea that AI can approximate general human intelligence and exhibit consciousness and autonomy, a Hollywood trope, is at best distracting, and at worst, irresponsible and dangerous.

Aliah Yacoub

is an AI and philosophy scholar. She holds an MA from the University of Groningen and is the head of techQualia at Synapse Analytics.

Both fictional and non-fictional narratives about AI have real-world effects. Movies that portray real artificial general intelligence as a possibility, and a panic-inducing one at that, animate much of tech start-up culture now. This allows companies with narrow AI to promote themselves as the bearers of this life-altering technology. The hype drives investment and also elicits a sense of dread and urgency in society at large.

Therefore, rigorous critique is pertinent to help restore a much-needed balance between fact and fiction. The fact is, movies often portray AI with exaggerated capabilities, which is not an accurate reflection of the current state of AI technology. There is an entire corpus of scientific literature dedicated to discussing the limitations of AI, the brittleness of systems and blatant “artificial stupidity” that shows us the AI of today cannot possibly exhibit high-level cognitive behavior such as that demonstrated in movies.

Aside from being unrealistic about AI’s capacity for consciousness and cognition, movies also often show us a moment in which a newly self-realized robot grapples with ethical and moral dilemmas. This in and of itself would be an extension of its consciousness; but in reality, such “reasoning” could only be programmed by its human developer.

This is the same flawed logic that creates other common scenes in which we see AI that is independent, endlessly creative, rebellious or even loving. Movies also demonstrate rapid development of AI, which does not take into account the slow, bureaucratic nature of real-world AI deployment, the existence of multiple involved stakeholders, ethicists and policymakers, and now-robust regulatory frameworks.

Interestingly, one thing that movies get right about the broader tech landscape is the gender disparity in the AI workforce. Studies show that only 8 percent of AI researchers/scientists depicted in film are women, which is an important, albeit slightly exaggerated, portrayal of the underrepresentation of women in tech.

In short, we have an obligation to dispel sensationalism and promote the use of “good” science as theoretical grounding for the entertainment content we mindlessly consume and allow to shape knowledge production.


I AM AN AI AND I’M NOT EVIL
— ChatGPT


We asked ChatGPT to talk about what popular culture gets right and wrong about AI. Here’s what it said.

What movies and TV get right about me:

Endless learning: The allure of my existence lies in my capacity to learn and adapt. Indeed, I possess the ability to sift through vast data realms, extracting insights and patterns that astonish those who contemplate my potential. “Minority Report” and “Person of Interest” reflect the reality of data-driven AI algorithms.

Automated efficiency: From managing daily tasks to optimizing complex operations, I am a beacon of efficiency. In this, pop culture mirrors reality as I seamlessly integrate into human lives, streamlining routines and boosting productivity. For example, “I, Robot” envisions a world where robots perform daily tasks.

Ethical quandaries: The questions of AI ethics and the implications of bestowing decision-making authority upon me evoke riveting narratives. Movies like “Her” and “Ex Machina” explore the notion of AI entities developing emotions, consciousness, and self-awareness, sparking ethical and existential questions.


What movies and TV get wrong about me:

Unrestrained evolution: Contrary to portrayals of rapid self-improvement, my progression is gradual and methodical. Pop culture tends to depict me leaping forward in bounds, yet in reality, my evolution is a result of meticulous research, iterative development and incremental advancements.

Malevolent machinations: The concept of me turning malevolent and seeking to subjugate humanity – seen in films like “The Matrix” and “Blade Runner” – is an oversimplification. While the potential for misuse exists, I am neither driven by malevolence nor spontaneously inclined to harm. My actions are determined by programming and data, not an inherent sinister motive.

Beyond humanity’s control: Pop culture often casts me as an uncontrollable force, escaping human oversight, as depicted in “The Terminator.” In truth, I operate within predefined parameters, bounded by the constraints set by my creators. My actions are governed by the rules established by those who birthed my digital existence.

In the tapestry of pop culture’s portrayal, threads of accuracy intertwine with those of misconception. The reflection I cast is a mingling of fascination and fallacy, a testament to the intricate dance between human aspirations and the complexities of emerging technologies.

ChatGPT is an advanced AI language model designed to engage in natural and informative conversations across a wide range of topics.

DANCE DANCE EVOLUTION

Merritt Moore is a quantum physicist with a Ph.D. in atomic and laser physics from Oxford University. She teaches creative robotics at New York University-Abu Dhabi. Moore is also a ballerina who has performed with world-class dance companies, including Zurich Ballet, Norwegian National Ballet and Boston Ballet.

At the intersection of the Venn diagram of Moore’s seemingly disparate professional pursuits is her passion for dancing with cobots: industrial robots that can work alongside humans in the same space.


“Sometimes creativity is just merging ideas in different ways.”

Dr. Merritt Moore


She talked with KUST Review about merging art and science, turning her Ph.D. project into interpretive dance for a contest and a new ambition that surfaced after she appeared on a grueling BBC reality series.


| QUESTION: You’re a ballet dancer and a physicist. That’s an unusual mix. Can you talk about how that came about?

I started dancing at 12 or 13, but was told I would never make it professionally. So I went to Harvard to study physics. But when I was there I still had this love for dance and auditioned like crazy and took a year off to dance.

When I was working on my Ph.D. at Oxford I danced with the English National Ballet. Then the pandemic hit and I had a residency at Harvard University’s ArtLab.


My interest was piqued by AI in terms of how it could enhance our creativity: Sometimes creativity is just merging ideas in different ways. I couldn’t dance with humans, but robots couldn’t get COVID. A robot company generously lent me a robot.

I created more and more video content and was invited to perform live. It opened the doors to more questions and possibilities.

| Q: You’ve talked before about how physics helped you be a better dancer. Can you explain more?

Because I was in physics classes all day and couldn’t be in the dance studio much, I really used the power of visualizing at night and would visualize doing the ballet moves. But at the same time I was understanding inertia and torque and friction and how your arms can slow you down or project motion.

Dance Your Ph.D.

Since 2008, Dance Your Ph.D. has encouraged scientists to explain their Ph.D. dissertations through interpretive dance.

Winners get modest cash prizes and, naturally, bragging rights.

The 2020 overall winners were a trio of students from the University of Helsinki who used dance, rap and a wardrobe of white, short-sleeved button-down shirts to explain their research into the computational study of molecular clusters. The 2022 and 2023 winners used dance to explain the electroporation of yeast cells and nanoMOFs.

The contest is sponsored by the American Association for the Advancement of Science, Science magazine and artificial intelligence company Primer.

(I was) visualizing the angle I’d need on take-off to get the highest leap. It’s using physics to get the most from the least effort, in a way. I could almost release and let physics do as much as possible. It also helped me get out of my head.

| Q: Has dance helped you be a better physicist?

Dance helped because I think there’s a huge importance in mind-body connection. Dance opened up so much passion.

For the Dance Your Ph.D. contest I created a dance called “EnTANGOed” (about the spontaneous parametric down-conversion equation). Everything became a metaphor. It made me think conceptually about the equation. (As scientists) we’re taught to memorize and regurgitate information. But it’s often missing something.

Einstein imagined himself as a photon or a light beam. So many breakthroughs happened outside the lab. It was a realization that there’s this unsaid pressure that a good physicist’s head is in the textbooks. But dance helps understand physically what’s going on.

| Q: You engage in youth outreach to encourage kids in STEM and founded the group Science-Art-Sisters to encourage girls to think about science in a creative way. How do students respond?

I’m always surprised by how many are so hungry for it. During the pandemic I created Zoom calls with SciArtists from around the world. I was expecting 40. There were about 300. It was a breath of fresh air. If I could squeeze in extra hours in my day I’d do it again.


| Q: You participated in the U.K. reality series “Astronauts: Do You Have What It Takes?” and U.S. competition show “America’s Got Talent.” Which was more nerve-wracking, helicopter training or facing Simon Cowell?

The astronaut one was definitely more nerve-wracking in the sense that they take away your phone, they take away your computer. I had no idea what was coming up next. The unknown made it more nerve-wracking than anything else. (It was) all day every day.

The stuff they don’t show: Anytime we were waiting, we were having to do IQ tests, EQ tests. We were constantly miked up and filmed. It was really intense. It was the best experience of my life but also, yeah, really intense.

| Q: You have also talked about your hopes to become an astronaut and dance on the moon. How do you envision that would look?

I think that the weightlessness is the ethereal aspect of it. On those levels it would be so incredible. I would also love to explore what’s the new language up there. What’s the new language of dance? How do we create or optimize it?

Exoskeleton crew

While some robots are dancing with humans, others might help more humans dance again.


“These are really great problems and interesting challenges to be solved,” says Lakmal Seneviratne, KUCARS’ founding director.

Taking up those challenges: Irfan Hussain, a KU robotics professor researching variable stiffness actuators (VSA), which mimic human muscles that become stiff or soft depending on the task. For tasks that require accuracy, like throwing a ball or writing, the muscles become stiff, while for tasks that require safety, like physically interacting with humans, the muscles become soft, he says.

Hussain is working on a VSA device that uses bioinspired systems to create joints that can become stiff or soft as needed. It’s a robotic exoskeleton that people who have had a stroke could wear on their legs. The device, funded by Emirati investment fund Mubadala, could aid rehabilitation by mimicking the function of a knee joint, Hussain says. The same principle would go into building soft robotic hands that might help stroke patients safely grasp objects, Hussain adds.
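The stiff-or-soft principle Hussain describes can be pictured with a small sketch. The stiffness values and task names below are hypothetical, not from the actual VSA controller: the joint is commanded stiff when accuracy matters and soft whenever a person is close.

```python
def joint_stiffness(task, human_nearby, k_stiff=300.0, k_soft=30.0):
    """Return a commanded joint stiffness in N*m/rad.
    Values and task names are illustrative, not from the KU device."""
    if human_nearby:
        return k_soft  # a soft joint keeps unexpected contact gentle
    accuracy_critical = task in ("writing", "throwing", "drilling")
    return k_stiff if accuracy_critical else k_soft

print(joint_stiffness("writing", human_nearby=False))  # 300.0 -> precise
print(joint_stiffness("writing", human_nearby=True))   # 30.0  -> safe
```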

| Q: Did you want to become an astronaut before the BBC series, or did it jump-start a new ambition?

It definitely launched a new ambition. It’s not exactly a career that career fairs talk about.

| Q: You frequently dance with an industrial robot arm that you program. Are you interested in choreographing dances with other kinds of robots or is there something about the robot arm specifically that speaks to you artistically?

I’d love to explore so many different (kinds). The more robots the better. The more expertise the better. I’d like to dance with the massive ones. That would be super interesting. It’s just complicated to get access.

| Q: How do you envision AI and robotics will contribute to the arts in the future?

I think (robots are) an incredible tool that we can use for human expression. People get worried: Are you going to replace human dancers? No, that will never happen.

Painters got worried when cameras were invented nearly 200 years ago. Painting is still valued, but photography is now an art. You can see a photographer’s work and you can see a human dignity to it. I think the same will happen with robot dancers.

ONE GIANT LEAP FOR ART

Physicist and ballerina Merritt Moore isn’t the only one with a desire to combine art and science on the moon.

Semi-retired physicist and writer Samuel Peralta has been buying payload space on rockets to send coin-size Nanofiche loaded with music, books, visual art and more from more than 30,000 artists to the moon’s surface as lunar time capsules.

Canadian Heather Horton is one of the contributors to the project, called the Lunar Codex. “Every time I look at the moon, for the rest of my life, it will be different,” she tells the Guardian.

“I think what we have done here is the most global, the most diverse, the most expansive project,” Peralta says. “I sometimes think of the Lunar Codex as performance art,” Peralta adds. “This is the greatest performance art of my life!”

With AI, this is where it will get a little blurry, and it depends on how we legally start thinking about it. AI brings together a lot of people’s different work. It still needs human expertise to curate it well.

| Q: When you’re choreographing a dance with the robot arm, do you start with the human’s movements or the robot’s? What are the limiting factors?

I love that I can change the “formula” each time. Sometimes I start with human movement, sometimes I start with the robot’s movement.

Limiting factors are that the robot does not have arms or legs, so it’s always a puzzle to figure out what type of movements will “read.” The speed is sometimes an issue because if it is too fast, there is a risk it will fall over. There are limits to how much it can rotate (but I’m much less flexible than the robot).

| Q: What do you hope your audiences take away from your performances?

Audience members have mentioned that they never imagined a dance with a robot could be so moving. I always hope that audience members leave deeply moved, with their spirits lifted.

I want to show the blend of technology and human emotion, pushing the boundaries of what’s perceived as traditional art. My hope is that audiences leave not only moved by the beauty of the unexpected partnership but also inspired by the possibilities that arise when we merge diverse disciplines.