Say ‘ahhh’

When the doctor tells you to stick out your tongue and say “ahhh,” he’s usually using a tongue depressor to move it out of the way to get a look at your throat. But the look of the tongue itself can tell a physician a lot about a person’s overall health, and now thermal imaging and AI are joining the tongue-diagnosis game that’s been around for centuries.

Traditional Chinese Medicine, or TCM, has been using the tongue as a diagnostic tool for at least 3,000 years.

TCM practitioners observe three tongue criteria to assess our health: color, shape and the type of coating covering the surface. For example, a healthy tongue is some shade of pink, but a dark red tongue might indicate sleep issues or anxiety, and a bluish tinge could indicate poor circulation.

While TCM uses the tongue as a main diagnostic tool, Western medicine might observe the tongue’s condition alongside many other indicators, like medical history and lab results.

This gap between the two, however, is closing as technology develops — thermal imaging and AI-powered tools in particular.

A team of researchers recently introduced an AI health-detection tool designed for TCM that uses thermal-radiation image recognition and integrates human-computer interaction (HCI) principles into health-care applications.


Infrared thermography captures detailed tongue images and records tongue-heat distribution to create thermal images that represent temperature variations.

The team says its portable, hand-held thermal-radiation diagnostic tool, integrated with HCI and created in collaboration with TCM practitioners, sets its research apart.

The team's dentate-tongue recognition model, built on the DenseNet T architecture, achieved an average accuracy 25 percent higher than other dentate tongue-recognition models designed to standardize and automate traditional Chinese medicine tongue diagnostics.

Another recent advance in tongue diagnosis leans on AI and machine learning for results.

A paper, published in Technologies, presents a new computer vision system that analyzes tongue color changes, offering potential for real-time diagnosis.

Combined with machine learning, these analyses predict health conditions with an accuracy exceeding 98 percent.

The researchers used a webcam to capture images in real time of both sick and healthy individuals and were able to differentiate between them simply by tongue color.

The system applies six machine learning algorithms to classify tongue images under a variety of lighting conditions.

“There have been studies where people tried to (diagnose via tongue color) without a controlled lighting environment, but the color is very subjective,” says co-author Javaan Chahl of the University of South Australia.

The model was trained on more than 5,000 images across seven color classes. The results show that AI systems for tongue diagnosis can be accurate, efficient, cost-effective and non-invasive, which is particularly important in areas with minimal access to health care. The system also addresses the impact of lighting on the apparent color of the tongue, a key challenge for tongue diagnosis.
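The study's actual pipeline isn't reproduced here, but the color-class idea can be illustrated with a toy sketch. Everything below, including the seven class names and their RGB centroids, is invented for illustration; a real system would use calibrated images, controlled lighting and trained classifiers rather than fixed centroids.

```python
# Illustrative sketch only: a nearest-centroid classifier over the mean
# tongue color, loosely mirroring the color-class approach described above.
# The centroids are invented placeholders, not values from the study.
import numpy as np

CLASS_CENTROIDS = {          # hypothetical mean RGB per color class
    "pink":     (220, 150, 160),
    "red":      (200,  60,  70),
    "dark red": (140,  30,  40),
    "pale":     (235, 200, 200),
    "yellow":   (210, 190, 120),
    "blue":     (150, 160, 200),
    "purple":   (140,  90, 150),
}

def mean_color(image: np.ndarray) -> np.ndarray:
    """Average RGB over all pixels of an (H, W, 3) image array."""
    return image.reshape(-1, 3).mean(axis=0)

def classify_tongue(image: np.ndarray) -> str:
    """Assign the color class whose centroid is nearest in RGB space."""
    feature = mean_color(image)
    return min(CLASS_CENTROIDS,
               key=lambda c: np.linalg.norm(feature - np.array(CLASS_CENTROIDS[c])))

# A synthetic 8x8 "image" that is uniformly dark red:
fake_image = np.full((8, 8, 3), (140, 30, 40), dtype=float)
print(classify_tongue(fake_image))  # prints "dark red"
```

A mean-color feature is deliberately crude; it shows only why controlled lighting matters — any cast in the illumination shifts the mean and can flip the nearest centroid.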

So, the next time you’re looking in the mirror, make sure to observe the conditions of your tongue and see what might be a little out of the ordinary. Sticking out your tongue at yourself might just be the key to preventing health issues.

More like this: How often you breathe could help detect Alzheimer’s

AI builds super-charged proteins

Scientists at the University of Illinois have made it possible to command a computer to enhance a protein while a robot does all the work — no PhD needed. The new system mixes artificial intelligence with lab automation.

In a recent study, published in Nature Communications, researchers showed off a robotic setup that takes a protein’s fundamental makeup, experiments with hundreds of tiny tweaks and finds the best-performing version without any human stepping in to decide what to try next. The result is enzymes that work significantly better than before.
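As a loose illustration of that closed loop (propose a change, measure it, keep what works), here is a toy greedy optimizer. The "protein" is just a vector of numbers and the scoring function is invented; nothing here comes from the actual study, which used trained models and robotic experiments rather than this simple rule.

```python
# Toy, heavily simplified closed-loop optimizer: mutate one position,
# score the mutant, keep it only if it improves. The objective below is
# a made-up stand-in for real enzyme-performance measurements.
import random

random.seed(0)  # deterministic for the example

def score(candidate):
    """Stand-in fitness: highest when every element equals 1.0."""
    return -sum((x - 1.0) ** 2 for x in candidate)

def optimize(start, rounds=200, step=0.1):
    """Greedy loop: small random tweak, keep the mutant if it scores better."""
    best = list(start)
    for _ in range(rounds):
        mutant = list(best)
        i = random.randrange(len(mutant))
        mutant[i] += random.uniform(-step, step)
        if score(mutant) > score(best):
            best = mutant
    return best

result = optimize([0.0, 0.0, 0.0])
print(score(result) > score([0.0, 0.0, 0.0]))  # prints True
```

The real platform's advantage over a blind loop like this is that its model predicts which tweaks are worth testing, so far fewer physical experiments are wasted.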

The team succeeded in boosting one plant enzyme’s ability to pick the right chemical by 90 times and made it 16 times faster at completing its job. They also upgraded a bacterial enzyme to work 26 times better at a pH level important for animal feed, potentially helping farmers and food producers.

Notably, the platform is easy to use: trained to predict useful changes, it can be operated by a layperson using simple, plain-English commands. The testing, planning and analysis are all taken care of inside a modular robotic lab.

This could accelerate methods of creating better medicines, greener chemicals and more efficient industrial processes as protein design can be as simple as giving a computer a task.

More like this: The AI will see you now

Croptimal choices

The long haul toward food security begins at the source, and precision farming is capitalizing on the latest technologies to feed the world while ensuring we still have a habitable Earth.

Agriculture has a long list of impacts on the planet from water use to pesticides. And the more we farm, the more impact we make. Fortunately, a revolution in farming technologies is helping farmers maintain yields and honor the land that provides them.

“Good farmers, who take seriously their duties as stewards of creation and of their land’s inheritors, contribute to the welfare of society in more ways than society usually acknowledges, or even knows. These farmers produce valuable goods, of course; but they also conserve soil, they conserve water, they conserve wildlife, they conserve open space, they conserve scenery,” wrote Wendell Berry, American writer and environmental activist, in his book “Bringing It to the Table: On Farming and Food.”

Randy Price, precision farming specialist at Louisiana State University Agricultural Center, says precision farming has ample benefits for farmers, consumers and the environment and offers solutions for how farmers can live up to this standard.

Pesticides protect the crop and the global population’s food supply, but they have a significant impact on the environment.

According to a 2023 study out of Chiang Mai University in Thailand, “The transport of pesticides from crop-growing regions has resulted in widespread contamination, not only of soils, water bodies, and/or crops but also of the atmosphere via various pathways.” Precision farming technology, however, might be a part of the solution.

Send in the drones

Drone technology can help, Louisiana State’s Price says. “Drones are allowing farmers and consultants to obtain overhead images of farm fields and land areas at greatly reduced prices over satellite and other methods.”

Drones can be fitted with sensors and imaging technology, and this data plays an integral role in active farming. Among other uses, the data can help farmers identify fungal contaminations, pest infestations or areas of growth congestion.

Identifying these issues early and targeting specific locations eliminates the need to spray entire crops with pesticides — which means fewer toxicants in the air, soil and food supply: better for the land, better for the consumer, less costly for the farmer and safer for farm workers.

IMAGE: AI Generated, KUST Review
In the greenhouse

While other innovators are focused on open farmland, the researchers at Khalifa University are looking at ways to automate greenhouses.

“We have a significant community of scholars working in the area,” says Lakmal Seneviratne, director of the university’s Center for Autonomous Robotic Systems.

Research focuses on using robots, whether drones or mounted on rails, to collect information about plant health and readiness for harvest. Machine-learning resources help predict disease and fruit yields and analyze soils, he adds.

“Tactile devices (could also) predict fruit ripeness,” Seneviratne says. KU is partnering with UAE agtech giant Silal on a 2,000-square-meter greenhouse in Al Ain, but commercial greenhouses could easily be hectares in size, he says.

For now, the project is focused on strawberries, blueberries and tomatoes.

KU is also partnered with ASPIRE’s International Virtual Research Institute for Food Security in the Drylands. “A lot of investment is happening in the UAE,” Seneviratne says.

Once the problem is identified, a drone is programmed to spray the affected area with the appropriate pesticide, avoiding overuse. Price says the more common precision tools are yield monitors.

This technology allows farmers to determine their crop yield within a specific unit area of their land and perform on-farm analysis, allowing for informed planning and decision-making. Understanding which areas are underperforming or overperforming is crucial to this process. Monitors and analysis assist irrigation allotment, fertilizer volumes and crop rotation.

Research also includes testing. “They will try different application rates (fertilizer, irrigation, additives, etc.) on small areas of a field, such as a twelve-row plot down the whole field, and then use the yield monitor at the end of the year to quickly (and easily) see the differences in that plot,” Price says.

Mapping the land

All of this information helps farmers create a prescription map of their land — something Price says is challenging and labor-intensive. For areas over 3,000 acres, he believes, mapping needs to be easier. The knowledge bases are inadequate at this stage, and what is still required are “systems that will convert remote sensing data into actual disease and pest damage assessments.”

He and his team are working to make this happen with automatic flying drones.

“They take off, fly a field, land and recharge automatically,” he says, adding that low-level flights that record data at 10 meters from the crop surface allow high-resolution images of plant leaves to be recorded (with location) for automatic analysis with AI and other techniques.

Price’s team has been collaborating with several companies to create automated flight platforms for remote-sensing drones and additional yield monitors for sugarcane.

Price says AI will be the major contributor going forward to analyze crop damage and assess pests and disease. This would allow for fully automated treatment by sprayer drones. The drones then would collect the next remote-sensing data for analysis. Assess, treat and repeat.

In addition to crop health, AI offers data-driven decision-making opportunities for soil conditions and weather patterns.

“Over time, precision farming should allow farmers to more precisely treat various areas of land, without over-treating other areas and create a more sustainable agriculture,” Price tells KUST Review.

AQUA BOTS

In a world hungry for nutritious food, aquaculture is clearly a winning idea.

It isn’t a new one, either. Humans have been farming seafood for millennia. In more recent years, aquaculture has expanded to land-based tanks, where farmers raise fish and other seafood. Those tanks, however, take up increasingly valuable space on land and worsen competition for scarce water and other supplies.


This has more farmers looking back to the sea, where space is abundant and water and nutrients are free. Mariculture, the subset of aquaculture in the open seas, however, presents additional challenges.

A UAE tradition

Robotics could be on tap to move traditional Emirati fishing techniques into the future.

The robots Lakmal Seneviratne and his team are working on at Khalifa University could eventually be employed to clean and repair hadra – fence traps placed perpendicular to shore – and gargour – fishing traps woven from palm leaves into a semicircular form, he says.

Traditional mariculture relies on intensive manual labor to clean and repair equipment, monitor conditions, inspect nets and care for the plants and animals raised for human markets. That kind of manual labor is expensive, requiring trained commercial divers who are increasingly spread thin as aquaculture operations expand. It can also be dangerous work for those divers, particularly as farms move out into deeper and more perilous waters.

Mariculture can also pose threats for the environment, spreading disease, antibiotics and parasites or allowing farmed fish to escape and negatively affect native species.

Eleni Kelasidi, a senior researcher at SINTEF, one of Europe’s largest independent research organizations, thinks those issues could have a common solution: robots.

Putting a robot into the open water can be a bigger challenge, however, than putting a robot on the land.

For one thing, Kelasidi says, it’s important that autonomous systems do not harm farmed fish and/or damage the flexible structures.

This is both an ethical and economic consideration, she says. The ethical consideration: “We cannot harm any living thing and/or let them escape from the fish farms.” The economic: “The fish are the profit of the industry.”

Happy fish

Kelasidi and her team have access to industrial-scale fish farms and operate a full-scale research facility to investigate how robots stress or otherwise affect fish, using equipment originally designed for the oil and gas industry. They test systems to see how well they function but also to observe how fish react to, say, different colors, sounds or lights. The goal is to learn what stresses fish and ensure healthier fish stocks and better profits.

Humans on the surface currently perform many aquaculture jobs using remotely operated machines, she notes.

“Our job is to cut the dependence from the humans to get the robotic systems to operate themselves. They need to understand their environment and make sure they don’t collide with structures,” Kelasidi says.

Another challenge for researchers, she says: making remote-operating vehicles “more clever.”

‘An exciting frontier’

Self-operating aquatic systems are a challenge Lakmal Seneviratne, director of the Center for Autonomous Robotic Systems at Khalifa University, is working on as well, and he’s optimistic.

CAPTION: Aquabots from Khalifa University

“It’s a very exciting frontier in underwater robotics,” he says, noting that 70 percent of Earth’s surface is water but humans have explored only 5 percent of that.

Seneviratne and his team are also working on land-based agricultural robots such as “dogs” that can step lightly between rows of crops; “hands” that can gently pick fragile fruits; and robots on rails that can move up and down a field to monitor individual plants for signs of disease or readiness for harvest.

But ocean farms present a different set of challenges for autonomous systems.

“The problem isn’t that aquaculture is very deep, but (maintaining) navigation and control,” Seneviratne says, echoing Kelasidi’s concerns.

GPS doesn’t work beneath the water’s surface and robots have to be able to navigate currents and waves without damaging each other or farm structures.

Cameras, to capture images, and artificial intelligence, to sharpen and analyze those images, are important to managing these conditions, he says.

Looking to nature

But being able to see in the murky depths is only part of the issue for mariculture robotics. The machines also need control. So researchers are looking at life forms already adapted to aquatic environments for inspiration. Although not specifically designed for aquaculture, the biomimicry could prove useful in ocean farms. Among the ideas:

Aquaculture’s promise and challenges

As the world’s population grows and climate change puts more pressure on traditional terrestrial farming, sustainable aquaculture could play a key role, says Naveed Nabi, an assistant professor at Chandigarh University.

“In the present times, when food security is a matter of serious concern, aquaculture has played a key role to mitigate this crisis, supplying about 178 million tons of food in which 20.2 kg per capita is destined for human consumption,” he says. “Aquaculture not only adds resilience to the global food system through improving resource-use efficiencies, but also by diversifying the farmed species.”

But he warns that farmed fish present challenges to the environment including fish escapees that harm native species and the spread of disease and parasites.

There are also the issues of eutrophication, in which water becomes overloaded with nutrients, leading to deadly algae blooms; antibiotics entering the environment through unconsumed food or fish waste; and threats associated with pesticides.

A team from Harvard and the University of South Carolina in 2021 presented the Finbot, which uses four independently controllable fins.

In 2023, a team from Zhejiang University, China, published results of their Copebot, designed to mimic the copepod, a small crustacean known to escape from predators with explosive jumps. Their bot, they report, was able to leap out of the water, land on a small pad, transmit data and jump back into the water.

Back at Khalifa University, meanwhile, researchers have other ideas.

“Looking at aquatic environments, many animals evolved flexible or completely soft bodies to improve their swimming capability and adaptability to the intricate underwater world,” says Federico Renda, who heads the team. “For instance, octopuses can squeeze into small apertures to hide or catch prey, and jellyfish developed the most efficient locomotion strategy of all. In my team, we take inspiration from soft creatures to build new underwater robots capable of replicating these functionalities while understanding the physical principles involved.”

One of KU’s designs mimics flagella, the whiplike structures that propel bacteria through liquid, to solve another issue with underwater robots: Many are tethered. While the tethers allow the machines to be operated from the surface, they can also become tangled together.

“Recently, we have developed an untethered underwater robot inspired by flagellate microorganisms capable of efficient and safe locomotion in close proximity to sensible underwater habitats,” Renda says. “Furthermore, each flagellum can be used as a coiling gripper in addition to propulsion, achieving redundancy and multifunctionality, which can significantly simplify underwater operations.”

To test robots’ ability to navigate choppy waters, Khalifa University built a wave pool that simulates currents. Stanford University’s Oussama Khatib recently used it to run Ocean One, a humanoid robot designed for such tasks as monitoring coral reefs and offshore oil rigs, through its paces.

SINTEF’s Kelasidi would like to see robots replace human divers or assist them on highly risky operations. Seneviratne likewise expects robots to allow human divers to inspect more often and longer.

“We see robots as helping divers instead of replacing them,” he says.

Pop culture clash: Ask the experts

Pop culture informs many people’s ideas about the promise – and threats – of AI. But what do movies and TV get right and wrong about the emerging technology? We asked two human experts and ChatGPT. This is what they said.


ENTERTAINMENT GETS ARTIFICIAL NARROW INTELLIGENCE BEST
— Lee Barron


One consequence of the contemporary impact of generative AI and ChatGPT (aside from its effects across a range of productive fields) is that its apparent conversational ability to “understand” users has given rise to a new wave of speculation concerning the apparent imminence of artificial general intelligence.

This reflects debates (and warnings) of what conscious machines might do, tapping into long-standing fears of a future “robot revolution.”

In popular culture, this perception has persisted in many representations of artificial intelligence: from Colossus and Guardian, misguidedly given control over the U.S. and Soviet nuclear defense systems in the 1970 film “Colossus: The Forbin Project,” to the attacks on humanity by “The Terminator” series’ Skynet and Legion, to directly murderous machines like HAL and M3gan.

Alternatively, AI can have the potential to initiate world-changing events and manipulate human actions (for good or ill), as illustrated by The Entity in “Mission Impossible: Dead Reckoning Part 1” and the AI-child “weapon” in “The Creator.” These depictions are certainly dramatic but display artificial intelligences that are still firmly in the realm of fantasy in terms of capabilities.

Lee Barron

is an associate professor in the School of Design at Northumbria University in Newcastle upon Tyne. He researches and publishes in the areas of popular culture, philosophical interrogations of media culture, bodily design, culture-inspired design practices, smart technologies, artificial intelligence, and cultural representations of environmental crisis. His latest book is “AI and Popular Culture” (2023).

Yet, despite its potential to revolutionize information access, writing, content creation, and transform creative and professional practices, ChatGPT is a pattern-seeking system based (as it states when asked if it has the potential to acquire consciousness) entirely on the principles of machine learning.

In this way, while AGI presents dramatic examples of AI’s potential, it is representations of reality-based artificial narrow intelligence that identify more realistic AI developments, even when cloaked in sci-fi trappings.

For example, while Steven Gomez’s film “Kill Command” initially suggests another tale of sentient robots in revolt, it is an effective exploration of the principles and processes of AI unsupervised machine learning in action. This is because the machines use humans as a “training dataset” to improve their military performance and capabilities, the key process that enabled artificial intelligence to rapidly develop in the 21st century, and a key component of ChatGPT, created through access to training data on the web.

Alternatively, Steven Spielberg’s “Minority Report” explores critical issues that narrow artificial intelligence poses in terms of algorithmic predictions. Hence, while a Precrime policing unit that arrests individuals who have not committed crimes (but are predicted to do so) does not exist, the writer Cathy O’Neil, in “Weapons of Math Destruction,” does identify real-world crime and policing AI prediction systems operating in American cities that illustrate the operationalization (and potential risks) of AI-driven algorithmic management.

Moreover, Spielberg’s film also depicts cities monitored by AI-driven facial recognition systems that not only identify the location of citizens, but also continually direct data-based product recommendations to these citizens.

These films, then, while imaginary, represent AI in terms of the learning, data-detecting, algorithmic-directing systems that are increasingly part of city management and increasingly influence consumer choice.

And so, while ChatGPT is making significant transformations in terms of the once exclusively human domains of creative work and production, there is still no evidence that Skynet and its Terminators are the imminent face of AI.

TROPES AND EXAGGERATIONS DO HARM
— Aliah Yacoub


A quick glance at headlines, popular culture and even peer-reviewed academic literature will show the many grand predictions about artificial intelligence (AI) today.

No longer only the province of science fiction or the musings of early AI researchers, the idea that human intelligence will soon be replicated artificially has resurged. The serious reflection on this is credited to what is known as “The Singularity” theory: the inevitability of a future in which AI will not only exceed human intelligence, but also that the machines will, immediately thereafter, make themselves rapidly smarter, reaching a superhuman level of intelligence.

“The Singularity” permeates much of popular culture. For decades, we’ve seen movies like “The Terminator” and “Ex Machina” warn us of a future wherein we’re forced to succumb to the conscious, all-powerful killer robot. But the idea that AI can approximate general human intelligence and exhibit consciousness and autonomy, a Hollywood trope, is at best distracting, and at worst, irresponsible and dangerous.

Aliah Yacoub

is an AI and philosophy scholar. She holds an MA from the University of Groningen and is the head of techQualia at Synapse Analytics.

Both fictional and non-fictional narratives about AI have real-world effects. Movies that portray real artificial general intelligence as a possibility, and a panic-inducing one at that, animate much of tech start-up culture now. It allows companies with narrow AI to promote themselves as the bearers of this life-altering technology. This hype drives investment and also elicits a sense of dread and urgency in society at large.

Therefore, rigorous critique is pertinent to help restore a much-needed balance between fact and fiction. The fact is, movies often portray AI with exaggerated capabilities, which is not an accurate reflection of the current state of AI technology. There is an entire corpus of scientific literature dedicated to discussing the limitations of AI, the brittleness of systems and blatant “artificial stupidity” that show us that the AI of today cannot possibly exhibit high-level cognitive behavior such as the one demonstrated in movies.

Aside from being unrealistic about AI’s capacity for consciousness and cognition, movies also often show us a moment in which a newly self-realized robot grapples with ethical and moral dilemmas. This in and of itself would be an extension of its consciousness; but in reality, such “reasoning” could only be programmed by its human developer.

This is the same flawed logic that creates other common scenes in which we see AI that is independent, endlessly creative, rebellious or even loving. Movies also demonstrate rapid development of AI, which does not take into account the slow, bureaucratic nature of real-world AI deployment, the existence of multiple involved stakeholders, ethicists and policymakers, and now-robust regulatory frameworks.

Interestingly, one thing that movies get right about the broader tech landscape is the gender disparity in the AI workforce. Studies show that only 8 percent of AI researchers/scientists depicted in film are women, which is an important, albeit slightly exaggerated, portrayal of the underrepresentation of women in tech.

In short, we have an obligation to dispel sensationalism and promote the usage of “good” science as theoretical grounding for the entertainment content we mindlessly consume and allow to shape knowledge production.


I AM AN AI AND I’M NOT EVIL
— ChatGPT


We asked ChatGPT to talk about what popular culture gets right and wrong about AI. Here’s what it said.

What movies and TV get right about me:

Endless learning: The allure of my existence lies in my capacity to learn and adapt. Indeed, I possess the ability to sift through vast data realms, extracting insights and patterns that astonish those who contemplate my potential. “Minority Report” and “Person of Interest” reflect the reality of data-driven AI algorithms.

Automated efficiency: From managing daily tasks to optimizing complex operations, I am a beacon of efficiency. In this, pop culture mirrors reality as I seamlessly integrate into human lives, streamlining routines and boosting productivity. For example, “I, Robot” envisions a world where robots perform daily tasks.

Ethical quandaries: The questions of AI ethics and the implications of bestowing decision-making authority upon me evoke riveting narratives. Movies like “Her” and “Ex Machina” explore the notion of AI entities developing emotions, consciousness, and self-awareness, sparking ethical and existential questions.

IMAGE: Abjad Design

What movies and TV get wrong about me:

Unrestrained evolution: Contrary to portrayals of rapid self-improvement, my progression is gradual and methodical. Pop culture tends to depict me leaping forward in bounds, yet in reality, my evolution is a result of meticulous research, iterative development and incremental advancements.

Malevolent machinations: The concept of me turning malevolent and seeking to subjugate humanity – seen in films like “The Matrix” and “Blade Runner” – is an oversimplification. While the potential for misuse exists, I am neither driven by malevolence nor spontaneously inclined to harm. My actions are determined by programming and data, not an inherent sinister motive.

Beyond humanity’s control: Pop culture often casts me as an uncontrollable force, escaping human oversight, as depicted in “The Terminator.” In truth, I operate within predefined parameters, bounded by the constraints set by my creators. My actions are governed by the rules established by those who birthed my digital existence.

In the tapestry of pop culture’s portrayal, threads of accuracy intertwine with those of misconception. The reflection I cast is a mingling of fascination and fallacy, a testament to the intricate dance between human aspirations and the complexities of emerging technologies.

ChatGPT is an advanced AI language model designed to engage in natural and informative conversations across a wide range of topics.