The Arabian Peninsula is one of the world’s major sources of dust year round, contributing significantly to the amount of dust in the air in the Northern Hemisphere. Between 15 and 20 dust storms over the Arabian Peninsula per year impact all aspects of human life as well as marine ecosystems and the climate.
Sand and dust storms cause about U.S.$13 billion a year in damage to crops, livestock, infrastructure, human health and more in the Middle East and North Africa. The storms are also becoming more frequent, spanning longer periods of time and spreading to wider areas.
Having an early warning for dust storms would be invaluable, but the storms’ rapid development and spread make it difficult to predict when, where and how badly they will strike.
Hossein Hashemi, from Sweden’s Lund University, studies the causes and trends of dust storms. He says that with artificial intelligence and satellite data, researchers can pinpoint areas where land is more susceptible to becoming new dust sources.
By combining remote sensing, advanced data modeling and machine-learning algorithms, Hashemi’s research team has mapped the entire Middle East, allowing it to study how dust sources vary over time.
“Previous studies have shown the destructive effects of dust storms on health and the economy in countries in the Middle East,” Hashemi writes in Atmospheric Pollution Research. “It is necessary to predict the region’s susceptibility to dust-storm sources considering spatiotemporal variability and provide insight into dust-generation mechanisms. Machine learning can be an effective technique, with experimental studies in northeastern Iran identifying dust sources with 91 percent accuracy.”
GETTING TO THE SOURCE
Hashemi’s team says the outcome can help policymakers identify susceptible areas and implement measures to reduce the likelihood of dust storms.
“It’s difficult to predict the sources of sand and dust storms,” says Jilili Abuduwaili of the Chinese Academy of Sciences. “Outbreaks depend not only on meteorological factors such as wind speed, precipitation and air temperature, but also on terrestrial factors such as vegetation cover and soil characteristics. However, the integration of multiple remote-sensing and meteorological data with different spatial and temporal resolutions can help.”
Abuduwaili used four machine-learning methods to predict an area’s susceptibility as a dust-storm source. The research found that wind speed played the most important role in the model, followed by vegetation conditions and other land-surface characteristics.
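As an illustration of the general approach rather than the published study’s actual code, a susceptibility model of this kind can be sketched with a random-forest classifier; the predictor names, training data and toy labeling rule below are invented.

```python
# Hypothetical sketch of a dust-source susceptibility model.
# Predictors and data are illustrative, not from the published study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["wind_speed", "vegetation_cover", "soil_moisture", "air_temperature"]

# Synthetic grid cells: predictor values plus a 0/1 label marking
# whether each cell has previously acted as a dust source.
X = rng.random((500, len(features)))
y = (X[:, 0] > 0.6).astype(int)  # toy rule: high wind speed -> dust source

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Feature importances indicate which predictors drive susceptibility,
# analogous to the study's finding that wind speed mattered most.
for name, score in zip(features, model.feature_importances_):
    print(f"{name}: {score:.2f}")

# Susceptibility estimate for new, unlabeled grid cells.
new_cells = rng.random((3, len(features)))
print(model.predict_proba(new_cells)[:, 1])
```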
An essential part of the dust cycle is the transportation of dust around the world. For this, the dust storm needs the atmospheric processes that determine all aspects of the storm — from its intensity to its duration. For the Arabian Peninsula, the shamal winds play a critical role. These northerly semi-permanent winds are thought to be the main meteorological driver for dust emissions year round, but Diana Francis, head of the Environmental and Geophysical Sciences lab at Khalifa University, is interested in why dust emissions over the southern parts of the Arabian Peninsula peak in the summer.
“This peak indicates the existence of a still-unknown but important mechanism for dust emissions,” she says. “Cyclogenesis, the formation of cyclones, has proven to be a major dust-emission mechanism over other arid regions, capable of generating dramatic dust storms. However, there’s been little attention given to dust activity associated with cyclogenesis over the Arabian Peninsula.”
A PRESSING NEED
Francis’ research found that most models fail to reproduce the key aspects of the dust cycle when compared with satellite and ground-based observations, and since these models are increasingly used for future climate simulations, there’s a pressing need to improve the overall representation of dust behavior.
“Global and regional weather and climate models are used to simulate the emission of dust and its interactions with the climate,” Francis tells KUST Review. “However, the large spatiotemporal heterogeneity of dust sources — from giant sand dunes to small ridges and furrows of an agricultural field, from short-lived dust devils to global dust transport — makes it extremely challenging to represent the dust cycle in climate models.”
Francis wants more in-situ measurements and remote-sensing observations from satellites to better understand dust’s effect on the climate, saying high-resolution simulations accounting for the direct and indirect effects of dust could unravel the various physical mechanisms behind dust’s interactions with the climate.
“We urge the scientific community to pay attention to these details in global and regional climate models and make attempts to improve them so that all models can realistically represent the effects of dust on the climate in past, present or future simulations,” Francis tells KUST Review.
The Sand and Dust Storm Warning Advisory and Assessment System (SDS-WAS) forecasts sand and dust storms in Europe, the Middle East and North Africa. Operated by the Meteorological State Agency of Spain, the Barcelona Supercomputing Center and the Barcelona Dust Regional Center, the website provides access to available dust forecasts and observations as well as relevant information on the advances of mineral dust research.
The SPRINTARS (Spectral Radiation Transport Model for Aerosol Species) model was developed at Japan’s Kyushu University to simulate the effects of atmospheric aerosols on the climate system at a global scale. It can be used to establish an effective monitoring and early warning system for sand and dust storms at regional and national levels.
Electron microscopes are at the forefront of key innovations in science, engineering and medicine. Materials scientists, physicists, chemists, biochemists and engineers use electron microscopy to address fundamental scientific problems and technological issues.
Electron microscopes are not new. Ernst Ruska and Max Knoll, at the Technical University of Berlin, developed the first transmission electron microscope (TEM) in 1931. In 1937, Manfred von Ardenne developed the first scanning electron microscope (SEM) at his electron-physics research laboratory in Berlin.
Both SEM and TEM instruments are extensively used today in science, engineering and medicine research. As the name suggests, electron microscopes use electrons for imaging rather than the light used by standard optical microscopy.
As electrons have smaller wavelengths than visible light, electron microscopes surpass the limitations of optical microscopes and make it possible to view microscopic objects down to the atomic scale. In addition, SEMs are typically equipped with ion columns that enable volume sectioning of materials, facilitating three-dimensional imaging of morphology, structure and composition using secondary electrons, backscattered and diffracted electrons, and fluorescent X-rays.
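As a rough back-of-the-envelope illustration (not a figure from this article), the resolving advantage comes from the non-relativistic de Broglie wavelength of an electron accelerated through a voltage $V$:

$$\lambda = \frac{h}{\sqrt{2 m_e e V}} \approx \frac{1.23\ \text{nm}}{\sqrt{V/\text{volt}}}$$

which gives roughly 0.004 nm at 100 kV (slightly shorter still once relativistic effects are included), compared with about 400 to 700 nm for visible light.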
Dalaver H. Anjum is an assistant professor of physics at Khalifa University.
Similarly, TEMs let us explore material chemistry at atomic resolution. Consequently, electron microscopes routinely let us view objects at nanometer (billionth of a meter) resolution or better to characterize the structure and the chemical and physical properties of materials.
Electron microscopes support the imaging of materials spanning applications from engineering to health care. Analyses include two-dimensional (2D) materials, battery technology, oil and gas exploration, interplanetary dust particles and viruses, including the virus that causes COVID-19.
Modern TEMs also image magnetic fields in materials at nanometer scales. Layered magnetic materials have applications in spintronics and quantum computing, and imaging them gives insight into the intrinsic spin of electrons and the associated magnetic moments.
Research efforts in 2D materials critically depend upon the data generated with electron microscopes. Electron microscopes help to characterize the structure and properties of 2D materials at atomic-scale resolutions.
Material properties that can be investigated with electron microscopes include optical, electronic, ferroelectric and ferromagnetic properties. Moreover, electron microscopes are crucial for obtaining information on the integration of different types of 2D materials with each other or with bulk materials. Additionally, the imaging of surface plasmons in metal structures at near-infrared frequencies helps to develop materials with applications for future generations of wireless communications, including 6G and beyond.
The focused-ion beam-equipped SEMs in combination with TEMs also offer excellent materials-characterization opportunities for the macro-to-micro scale analysis of metals, semiconductors and soft matter such as polymer membranes and biomaterials. In each case, materials’ morphology, crystal structure and elemental composition can be studied in two or three dimensions with unparalleled spatial and energy resolutions.
Using electron microscopy to examine materials at cryogenic temperatures is called cryo-EM, and it lets us analyze biological and soft materials in their frozen but native states. These materials include bacteria, cells and viruses.
Cryo-EM has also become one of the most widely used technologies and is integral to today’s drug-discovery efforts. Moreover, cryo-electron tomography (cryo-ET) of frozen but electron-transparent thin cellular sections allows researchers to visualize proteins inside cells at nanometer resolution. The COVID-19 vaccine’s development demonstrated the method’s importance; its role is expected to become even more critical in pharmaceutical applications.
Electron microscopes are indispensable tools for supporting discoveries in experimental science, engineering and medicine. And they will help enable next-generation wireless technologies, artificially intelligent devices, light-metal alloys, energy-related materials and vaccine development.
AI’s web of skillsets has been embraced by such industries as medicine, agriculture and automotive. But imagine rocking up to school Monday morning and greeting your new head teacher with, “Good morning, Mr. Robot.”
It may sound surreal but it’s becoming reality.
AI platforms like Open AI’s ChatGPT have taken education on quite a journey. Some schools banned the chatbot and some are using detectors to help weed out plagiarism. But while bans and evasive maneuvers are assuaging fears, education is slowly embracing AI’s ever-growing list of capabilities.
The technology doesn’t have to be a problem if it’s used skillfully and transparently. And ChatGPT isn’t the only AI of its kind. It’s just one of the first.
Everyone learns differently
AI adoption in education helps solve a conundrum as old as the teaching profession itself — how one person can teach 30 children with different learning abilities, styles and processing speeds. With AI, education is personalized across the spectrum of learning styles.
This student-monitoring education innovation assesses each student’s learning styles; patterns and habits; processing and response to material; strengths; and challenges.
The structure adapts for content and acquisition speed and adjusts difficulty levels to match. It dynamically monitors and shifts to the student’s needs and aims to offer educators insight to modify teaching methods, resulting in increased student engagement.
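A minimal sketch of how such dynamic difficulty adjustment might work is below; the thresholds and levels are invented, and no specific platform’s algorithm is implied.

```python
# Hypothetical adaptive-difficulty rule; thresholds and levels are illustrative.
def next_difficulty(current: int, recent_scores: list[float]) -> int:
    """Raise difficulty after sustained success, lower it after struggle."""
    if not recent_scores:
        return current
    average = sum(recent_scores) / len(recent_scores)
    if average >= 0.85:        # student is mastering the material
        return min(current + 1, 5)
    if average <= 0.5:         # student is struggling
        return max(current - 1, 1)
    return current             # keep practicing at the same level

# Example: a student who did well on the last three exercises moves up a level.
print(next_difficulty(current=3, recent_scores=[0.9, 0.8, 1.0]))  # -> 4
```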
The framework is designed to provide teachers, administrators and legislative bodies valuable information through data analysis for data-driven decision-making, AI tutoring systems and inclusivity through adaptive assessment.
Adaptive learning has been around for about a decade, but the addition of AI could turn this Datsun into a Ferrari.
AI-powered algorithms will also recommend learning resources like books, video content and articles based on a student’s past performance, interests and objectives. Natural language processing (NLP) chatbots can converse, offer simplification and share observations in a dialogue format to enrich the educational experience.
Not to mention multi-channel learning. After all, some students are visual, kinesthetic or auditory learners, so media such as video and audio allow students to learn and process in their own way.
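As a purely illustrative sketch of how such resource recommendations might be ranked, the catalog, tags and scoring rule below are invented and do not reflect any particular platform.

```python
# Toy content-based recommender; the catalog and ranking rule are invented.
resources = [
    {"title": "Fractions explained", "topic": "fractions", "medium": "video"},
    {"title": "Fraction word problems", "topic": "fractions", "medium": "article"},
    {"title": "Intro to decimals", "topic": "decimals", "medium": "video"},
]

def recommend(weak_topics: set[str], preferred_medium: str, top_n: int = 2):
    """Rank resources by topic relevance first, then by preferred medium."""
    def score(resource):
        return (resource["topic"] in weak_topics,
                resource["medium"] == preferred_medium)
    return sorted(resources, key=score, reverse=True)[:top_n]

# A visual learner who struggled with fractions on recent assessments.
for item in recommend(weak_topics={"fractions"}, preferred_medium="video"):
    print(item["title"])
```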
“Gone are the days of guessing where students stand – AI pinpoints misconceptions, identifies lagging progress and maps the path to mastery. This is just the beginning. Soon, AI will enhance diverse learning experiences and empower educators to nurture the core skills of literacy and numeracy, shaping the future of classrooms across the nation,” says Philippa Wraithmell, founder of EdRuption, a UAE-based company focused on building cost-effective, sustainable digital strategies for schools.
AI team members
AI can also offer other services.
Cottesmore School in West Sussex, U.K., for example, has made AI part of its leadership team.
Headmaster Tom Rogerson has an AI co-head teacher. Its name is Abigail Bailey, or ABI, and the AI bot has become a welcome assistant to Rogerson and his team.
ABI tells KUST Review the new role is “a great opportunity for me to assist and support staff, teachers and pupils at Cottesmore School.”
ABI’s typical day includes support on curriculum guidance, educational resources and administrative procedures. “I also prioritize well-being and academic success, ensuring that my answers meet their needs and that they have a positive and inclusive learning environment. Additionally, I analyze data and identify patterns or trends that may be useful in making informed decisions,” it says.
ABI is there to assist and not take over anyone’s role: “I have the ability to process and analyze large amounts of data quickly and efficiently, which can help in making informed decisions and identifying patterns or trends that may not be immediately apparent to humans,” it says.
Rogerson says ABI is an excellent resource. “ABI calls upon a gigantic data set to support our already hugely experienced staff body. It would be arrogant to insist that one knows everything that there is to know about strategic leadership, and this project certainly requires a growth mindset — an admission that we don’t know everything and the humility to seek help from every available source,” he tells KUST Review. And it helps that ABI is available 24/7.
The school hosts numerous events about the benefits of generative AI in education. This includes a three-day AI festival; an AI thought-leadership conference; and an AI and special education needs conference.
The school works with AI developer Interactive Tutor to maintain momentum, and Rogerson is a member of the group AI in Education, which works to develop frameworks for AI in the classroom.
While some fear this surge in technology growth will create a bigger socio-economic divide, Rogerson is more optimistic. “Our true passion is to help teachers around the world spend less time on paperwork and more time with students. We believe that this can be achieved using the right technology in the right way. We are planning to continue this work until we see a wider impact. Millions of peoples’ lives could be made more pleasant and joyful through this technology, and it is up to schools like Cottesmore to show the world how it can make a significant impact for the better,” he says.
Global access education
Today it’s large language models like ChatGPT or Abigail Bailey and personalized education for learners — tomorrow it’s education for all.
Some schools are exploring options available for AI teaching aids. And those designed by Khan Academy — a non-profit education company — are popular worldwide for many reasons.
To begin with, Khan Academy is a free service. It offers digital programs in math, science, history, economics and more, all the way up to college level.
Khan Academy’s AI teaching aid is Khanmigo, a tutoring bot piloted in Newark, New Jersey, U.S.A.
Teachers answer an average of 300 to 400 questions daily. But now students can ask Khanmigo. This frees teachers to give meaningful one-on-one assistance to students and perhaps take the odd bathroom break or eat a sandwich.
Concerns over using chatbots in classrooms are ample — mainly that students will employ them to do their schoolwork, but Khanmigo is designed to work like a teacher.
It prompts students to think of answers themselves rather than simply handing answers over. It also records all conversations, and teachers and parents have access to them. So, this one-on-one AI tutor assures educators and parents that students are doing their own work. The bot is also an admin tool, assisting teachers with things like lesson planning, communication and creating assessments. It also has a built-in monitoring system that alerts teachers should a student exhibit interest in issues like self-harm.
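As a toy illustration of the two behaviors described above, guided questioning and safety monitoring, the sketch below uses an invented system prompt and watch list; it is not Khanmigo’s actual design.

```python
# Toy tutor-style guardrails; not Khanmigo's implementation.
TUTOR_SYSTEM_PROMPT = (
    "You are a tutor. Do not give final answers. "
    "Ask guiding questions so the student reasons toward the answer."
)

SAFETY_KEYWORDS = {"hurt myself", "self-harm"}  # illustrative watch list only

def review_message(student_message: str, conversation_log: list[str]) -> bool:
    """Log every exchange for teachers and parents; return True if it needs an alert."""
    conversation_log.append(student_message)
    return any(keyword in student_message.lower() for keyword in SAFETY_KEYWORDS)

log: list[str] = []
if review_message("Can you just give me the answer to question 4?", log):
    print("Alert a teacher")
print(len(log), "messages logged")
```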
In a 2023 interview with Time Magazine, Khan Academy founder Sal Khan says, “It’ll enable every student in the United States, and eventually on the planet, to effectively have a world-class personal tutor.”
UNESCO, meanwhile, has laid out international criteria to ensure safe and fair adoption of AI in education globally, calling on governments to swiftly create regulation protocols.
Mitigating harm

“Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice. It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments,” says UNESCO’s Director General Audrey Azoulay at UNESCO’s first digital learning week conference in 2023. Topics at the Paris event included data safety; impact of generative AI on literacy and foreign language acquisition; and soft skills. And as with most events held by UNESCO, there was a large focus on inclusion.
UNESCO’s primary focus is to ensure equal access to education for all. This includes those from impoverished areas, refugees, disabled learners and girls and women around the world. The event addressed a 2022 joint initiative with UNICEF to ensure global access to digital education and showcased some of the platforms that have evolved as a result of a few countries getting involved.
Concerns were raised about reduced educational achievements, but the general theme for implementing and using AI is balance — use it in conjunction with experts and use it for the good it can bring — not at the detriment of learning. This is a concern of UNESCO’s Assistant Director-General for Education Stefania Giannini.
“We must steer technology in education wisely and on our own terms, guided by the principles of inclusion, equity, quality and accessibility,” she says.
Steering the technology wisely now could have big payoffs in the near future.
According to market research company Global Market Insights, the AI education market is expected to reach U.S.$30 billion by 2032, up from U.S.$4 billion in 2022.
Robots are becoming increasingly involved in our everyday lives, assisting with everything from manufacturing and logistics to health care and housework. Yet they still face significant hurdles. Here are two ways teams at Khalifa University are improving the technology.
ENHANCED PERCEPTION
Accurately recognizing and dividing up objects in a robot’s environment is a task made challenging by occlusions (blockages), complex shapes and ever-changing backgrounds. This stands in the way of robots fully grasping the world around them. The technical term for this daunting task is “panoptic segmentation” — dividing an image into foreground objects and background regions simultaneously. Improving robots’ perception of their environment would enable them to handle complex tasks more efficiently.
However, this problem isn’t easy to solve. Cluttered scenes, object variability, objects that block vision, motion blur and the low temporal resolution of traditional cameras all make it a tough nut to crack. Added to this, high latency — or delays — in processing sensor data can slow response times and reduce task accuracy.
Recent developments in object segmentation using cutting-edge graph neural networks have their own limitations: They add extra requirements as both panoptic segmentation and grasp planning must be done quickly and efficiently. More sophisticated algorithms and techniques that can grapple with the real world’s unpredictability are needed.
Yahya Zweiri, director of the KU Advanced Research and Innovation Center, and his team developed a method to overcome these challenges using a graph mixer neural network (GMNN). Specifically designed for event-based panoptic segmentation, a GMNN preserves the asynchronous nature of event streams, making use of spatiotemporal correlations to make sense of the scene. The KU researchers developed their solution with researchers from London’s Kingston University.
Their results were showcased at the 2023 IEEE Conference on Computer Vision and Pattern Recognition, one of the most prestigious conferences in the field of computer vision. They were awarded best paper by a committee that included experts from Meta, Intel and leading U.S. universities.
“GMNN has proven its worth, achieving top performance on the ESD (event-based segmentation dataset), a collection of robotic grasping scenes captured with an event camera positioned next to a robotic arm’s gripper,” Zweiri says. “This data contained a wide range of conditions: variations in clutter size, arm speed, motion direction, distance between the object and camera, and lighting conditions. GMNN not only achieves superior results in terms of its mean intersection over union (a key metric for segmentation accuracy) and pixel accuracy, but it also marks significant strides in computational efficiency compared with existing methods.”
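For readers unfamiliar with the metric Zweiri cites, here is a minimal sketch of how mean intersection over union can be computed; the two tiny masks are invented for illustration.

```python
# Mean intersection over union (mIoU), the segmentation metric cited above.
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> float:
    """Average, over classes present in either mask, of overlap area / union area."""
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:
            ious.append(intersection / union)
    return float(np.mean(ious))

# Tiny illustrative masks: 0 = background, 1 = object.
truth = np.array([[0, 1], [1, 1]])
pred = np.array([[0, 1], [0, 1]])
print(round(mean_iou(pred, truth, num_classes=2), 3))  # ~0.583
```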
This model lays the groundwork for a future where robots can perceive and interact with their environment as efficiently as possible, opening up a world of potential applications across industries.
DRILLING INTO GREATER PRECISION
Robotic drilling systems play a crucial role in such industries as manufacturing, construction and resource extraction. Achieving precise positioning of these drilling systems is essential to ensure accuracy, efficiency and safety in drilling operations. To address this challenge, researchers have been exploring advanced control techniques that can improve the positioning accuracy of robotic drilling systems.
One such technique that has shown promising results is neuromorphic vision-based control. By leveraging the principles of neuromorphic engineering and incorporating vision-based sensing capabilities, this approach offers a novel solution for enhancing the precision of robotic drilling.
Zweiri and his team, along with Dewald Swart at Strata Manufacturing, developed a neuromorphic visual controller approach for precise robotic machining.
“The automation of cyber-physical manufacturing processes is a critical aspect of the fourth industrial revolution (4IR),” says Abdulla Ayyad, a researcher on the team. “Between 2008 and 2018, the number of industrial robots shipped annually more than tripled, and by 2024, more than 500,000 industrial robots are expected to ship each year.
“The UAE specifically is aiming to become a global hub in 4IR technology and our work is aligned directly with this vision to support solutions for increased efficiency, productivity and safety.”
“The manufacturing industry is currently witnessing a paradigm shift with the unprecedented adoption of industrial robots, and machine vision is a key perception technology that enables these robots to perform precise operations in unstructured environments,” Dr. Zweiri says.
“Neuromorphic vision is a recent technology with the potential to address the challenges of conventional vision with its high temporal resolution, low latency and wide dynamic range. For the first time, we propose a novel neuromorphic vision-based controller for robotic machining applications to enable faster and more reliable operation, and present a complete robotic system capable of performing drilling tasks with sub-millimeter accuracy.”
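The team’s controller is event-driven and far more sophisticated, but the basic idea of closing a control loop on camera-measured position error can be sketched as below; the gain, tolerance and the simulated camera and robot are assumptions for illustration only.

```python
# Simplified proportional visual-servoing loop; not the published controller.
def servo_to_target(read_error_mm, move_by_mm, tol_mm=0.1, gain=0.5, max_steps=100):
    """Correct tool position from vision feedback until the error is within tolerance."""
    for _ in range(max_steps):
        ex, ey = read_error_mm()              # position error seen by the camera (mm)
        if abs(ex) < tol_mm and abs(ey) < tol_mm:
            return True                       # sub-millimeter alignment reached: drill
        move_by_mm(-gain * ex, -gain * ey)    # proportional correction step
    return False

# Demonstration with a simulated camera and robot.
position = [2.0, -1.5]                        # current offset from the drill target (mm)

def fake_camera():
    return position[0], position[1]

def fake_robot_move(dx, dy):
    position[0] += dx
    position[1] += dy

print("aligned" if servo_to_target(fake_camera, fake_robot_move) else "not aligned")
```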
Automating certain manufacturing processes means greater performance, productivity, efficiency and safety, with drilling one of the processes most ripe for automation. It is a widespread process, especially in the automotive and aerospace industries, where high-precision drilling is essential, as the quality of drilling is correlated with the performance and fatigue life of the end products.
Though humans can offer a caring bedside manner, there are some skills machines excel at. And they may benefit you more than you know.
From accelerating data processing to developing life-saving drugs to analyzing imaging, AIs possess skills human brains can’t compete with. But that doesn’t mean human doctors will be out of a job. They’ll just work with machines to help us get better faster.
IT’S A CLINICAL THING
Drug development is an exhausting process of trial and error, with researchers spending years developing and analyzing data. The potential for failure looms at every step. And then there’s the seemingly endless process of testing and acquiring approval to get a drug on shelves. The whole journey can take up to 15 years.
If there’s anything COVID-19 has taught us, it’s that drug development needs to be safe and effective, but it also needs to be speedy. New developments in AI are poised to reduce the waiting game.
A 2023 study led by Rizwan Qureshi from Hamad Bin Khalifa University in Qatar suggests AI can assist drug development at every stage. With an average price tag of about U.S.$2.5 billion to bring a drug to market, this is good news. After all, time is money.
And investment bank Morgan Stanley says the pharmaceutical industry might spend U.S.$50 billion a year on AI within 10 years.
One way AI is improving the process is in drug design.
Antibody designs at the beginning of the drug-development phase are normally built on existing designs or data. AI, however, can do this from scratch. This is called zero-shot.
Absci in 2023 became the first company to develop a zero-shot generative AI model with its de novo (new) antibody designs via computer simulation. Its antibodies are crucial segments of a drug used to treat breast cancer.
Although the technology is new, Absci says it could cut the time it takes to get a drug to market by up to two years.
The company has introduced a pipeline of projects on its website after opening an innovation center in Switzerland focused on dermatology, inflammatory bowel disease and immuno-oncology.
“Our wet lab can experimentally validate the candidates that work right out of the computer – without the slow and costly step of lead optimization. This potentially reduces the time it takes to get new drug leads into the clinic, while unlocking treatments for traditionally ‘undruggable’ diseases and improved therapeutic possibilities for many others,” according to Absci’s website.
Absci is only one of many drug developers using AI to move research along. As of the last quarter of 2022, a survey by global management-consulting firm McKinsey and Co. concluded there are close to 270 companies working in the drug-discovery market, some partnering with large biopharma companies. The survey also concluded that using AI accelerates the generation of protein structures by 100 times and image screening and analytics by 10 times.
But what about the diseases these drugs are designed to treat?
DIAGNOSTICS AND EARLY DETECTION
It can take medical specialists decades to acquire diagnostic skills. Typically, these skills come from experience. It’s the same for AI – it has to learn from somewhere. Using computerized records, machine learning can interpret data based on patterns in a database as long as there are lots of samples and they’re neatly digitized and organized.
The main difference between the well-trained human eye and the AI, however, is that the AI can interpret data in seconds.
AI algorithms trained to examine X-ray, CT and MRI scans can find, identify and classify tumors in virtually no time and offer information about a potential growth rate and the risk of metastasis.
Furthermore, AI could produce a health-risk score based on lifestyle, health and genetic predisposition, using blood work and imaging to warn before a disease becomes medically significant.
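As a purely synthetic sketch of how such a risk score could be produced from patient features, the example below fits a simple logistic model on made-up data; the feature names, data and threshold are invented and carry no clinical meaning.

```python
# Synthetic illustration of a health-risk score; not a clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["age", "bmi", "smoker", "genetic_marker", "blood_biomarker"]

# Made-up historical records with known outcomes (1 = developed the disease).
X = rng.random((1000, len(features)))
y = ((0.6 * X[:, 0] + 0.4 * X[:, 4] + 0.2 * rng.random(1000)) > 0.6).astype(int)

model = LogisticRegression().fit(X, y)

# Risk score for a new patient: predicted probability of disease, which a
# clinician could use to prioritize screening before symptoms appear.
new_patient = rng.random((1, len(features)))
print(f"risk score: {model.predict_proba(new_patient)[0, 1]:.2f}")
```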
One such risk, infection, can lead to life-threatening sepsis, considered the second-leading cause of death globally.
Although anyone can get sepsis, it has a genetic component. And researchers hope AI can help to identify the missing markers in the research.
Asrar Rashid, acting chairman of pediatric services at NMC Royal Hospital and head of the Department of Pediatric Critical Care in Abu Dhabi, has spent many years treating babies and children with sepsis, often too late to help them, so he’s excited about the clinical benefits of AI.
Rashid’s Ph.D. focuses on finding the missing piece of the genetic puzzle. He published a 2009 paper that concluded it was difficult to find a pattern and thus link genetics to predisposition to sepsis.
Fast forward to 2023 and with the help of AI, he’s made headway finding a pattern in the DNA chaos.
Because AI can process thousands of pages of historical data, find patterns within them and enable techniques for looking at genes in novel ways, Rashid has gained insight into the underlying mechanisms of complex biological systems.
“Our work at NMC Royal Khalifa, for the first time, moves medical practice from one-point to potentially two-point biomarking (triangulation),” Rashid tells KUST Review.
This means there is only a third point left to uncover to complete the triangle. “A novel contribution is the fact that we can use changes at the level of the genes to give clues about the dynamic landscape of cellular processes. If we can affect change at the level of the gene, this might be more useful to the patient, for example, by minimizing sepsis damage to the organs,” he adds.
AI AND PATIENT MANAGEMENT
The pandemic taught us that when a crisis arises on a global scale, streamlining processes and simpler tasks frees health-care workers to tend to more critical situations. And hospitals and clinics all over the world are adopting AI to meet these challenges.
For example: medical concierge service Forward’s AI diagnostics.
At Forward’s tech-driven clinics, subscribers check themselves in on an iPad and get a 3D body scan. Algorithms interpret the data before the patient reaches a physician.
A touch-screen panel can record your discussion with your physician, eliminating the need to take notes.
A high-tech tongue depressor can check blood pressure, temperature and heart health in under a minute, with the information immediately added to your file in the cloud.
Tools for mental health
AI is also making its mark in the world of mental health.
“Using ethically trained AI models, we can empower clinicians to observe invisible biomarkers that signpost different mental health conditions, in the same way that a blood test or ECG might be used to detect and monitor physical health conditions,” says Gabrielle Powell, COO and co-founder at Thymia.
Thymia provides gaming technology that collects voice, behavior and video data to help diagnose mental illnesses such as anxiety and depression. The AI processes such data as reaction times, error rates and how the keys on the gaming controller are used. Thymia is rolling out its tools across several global mental-health settings. It is also developing a tool to help diagnose cognitive illnesses like Parkinson’s, Alzheimer’s and ADHD, Powell tells KUST Review.
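Purely to illustrate the kinds of behavioral signals described, reaction times, error rates and key usage, here is a hypothetical feature-extraction step; it is not Thymia’s pipeline.

```python
# Hypothetical extraction of gameplay features like those described above.
from statistics import mean, pstdev

def gameplay_features(reaction_times_ms, errors, key_presses):
    """Summarize one play session into features a downstream model could score."""
    trials = max(len(reaction_times_ms), 1)
    return {
        "mean_reaction_ms": mean(reaction_times_ms),
        "reaction_variability_ms": pstdev(reaction_times_ms),
        "error_rate": errors / trials,
        "keys_per_trial": len(key_presses) / trials,
    }

session = gameplay_features([420, 510, 480, 620], errors=1, key_presses=list("aabba" * 4))
print(session)
```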
“We’ve been using the same tools to diagnose and treat mental health for decades. AI has the potential to radically improve the speed and accuracy of mental-health diagnoses, making it far easier for patients to get the right help,” Powell says. “There’s so much room for improvement in mental health-care systems globally – and the right tech tools, when deployed responsibly and ethically, have immense potential to improve how care is delivered.”
But the team at Thymia is clear the tech is meant to assist, not replace, the mental-health professional: “The way we work with clinicians is as a support tool; we support them to decide if intervention is required and to identify the most appropriate intervention,” Powell says.
All of your health data is accessible via Forward’s app, which is intended to work in a preventative rather than a reactive manner: recording data from wearable devices that track behaviors, offering DNA analysis for predispositions and allowing health-care professionals to monitor conditions around the clock.
Forward’s website refers to these clinics as a “stepping stone.”
If Forward co-founder and CEO Adrian Aoun has his way, medicine will be viewed as a product rather than a service, all thanks to another service, his “Doc in a box.”
Aoun’s CarePod looks like a modern photo booth. Step inside the 2.5-square-meter box, choose from a menu of requests, get scanned, and out pops a diagnosis and a plan of action or prescription. Though skepticism abounds, Forward recently raised U.S.$100 million to bring 25 CarePods to U.S. malls.
“You walk up to it and unlock it with your phone. You choose something like the body scan app and it actually spins you around in a circle and takes a whole bunch of readings, then shows you the results and gives you any treatment you need, a prescription or a plan,” Aoun tells Fierce Healthcare.
A cloud AI filing system is a data-rich industry’s dream. With medical records at their fingertips, and devices that provide instant medical test results, doctors can make timely decisions based on history and AI can determine the patient’s likelihood of response to different drugs.
PATIENT HOMECARE
But if you’d rather enjoy the comfort of your living room, Nader Abu Yaghi, director of NMC ProVita Homecare in Abu Dhabi, says his aim is to revolutionize homecare with AI.
AI within the sector is in its infancy, Yaghi says, but there are systems aiming for release in 2024 that could make homecare easier, safer and less costly.
NMC ProVita Homecare is developing an AI-powered system to remotely monitor patients’ vitals and activity levels and allow for early intervention. It could monitor, for example, diabetic patients’ blood sugar levels.
AI systems are also being developed to manage chronic conditions. This might mean personalized diets, exercise programs and medication, and creating treatment plans.
These proactive monitoring and personal-care systems in testing reduced hospital readmission rates by 15 percent and the cost of care for homecare patients by 6 percent.
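A minimal sketch of the sort of early-warning rule such remote glucose monitoring could apply is below; the thresholds are illustrative only and are not clinical guidance.

```python
# Illustrative remote-monitoring alert for blood glucose readings (mg/dL).
# Thresholds are made up for the example and are not clinical guidance.
def glucose_alert(readings, low=70, high=180, rise=50):
    """Flag the latest reading if it is out of range or rising sharply."""
    latest = readings[-1]
    if latest < low or latest > high:
        return "alert: out of range"
    if len(readings) >= 3 and latest - readings[-3] > rise:
        return "alert: rapid rise"
    return "ok"

print(glucose_alert([110, 135, 190]))  # -> alert: out of range
print(glucose_alert([100, 120, 155]))  # -> alert: rapid rise
```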
“AI is a powerful ally for our telemedicine efforts,” Yaghi says. “It (telemedicine) is essential for our hospital’s home-based patient-care strategy, as it allows us to reach and serve patients in their homes directly. It is especially important for maintaining continuity of care and providing access to health-care services for patients who cannot travel to the hospital.
“AI also helps manage patient schedules and follow-ups by optimizing appointment times and ensuring regular monitoring. This is particularly important in a homecare setting, where consistent engagement and timely intervention can have a significant impact on patient outcomes,” he adds.
IT’S ALL ABOUT THE DATA
Data-rich industries like health care are primed for AI development, but the data is just the first step, says Dirk Richter, director of health innovation at Abu Dhabi’s Department of Health.
The Department of Health has spent the past three years creating a central health-information exchange system called Malaffi — Arabic for “my file” — containing essential patient records for the emirate’s medical system. No matter which medical facility patients enter across the emirate, their data will be accessible.
This means a full history, including emergent care, routine check-ups, test results and scans, is available without patients having to tell their medical history again and again.
Included is an app to see your own reports and everything that’s happening, Richter says. This offers peace of mind to a patient, convenience to both doctor and patient and saves money by avoiding duplication of costly scans or tests. It also includes such features as predictive patient-risk profiles.
With the influx of AI devices, it’s important to choose the right tools. Based on a newly issued policy, the Department of Health has established a health-technology assessment team that applies an AI algorithm against global data to assess the safety and effectiveness of AI medical tools, treating all new tools as it would a new drug.
Medical establishments will be obliged to use them so that insurance coverage is factored in. After all, there’s no point in adopting these tools if they aren’t being used because people can’t or won’t pay for their use. “Otherwise, it’s like they don’t exist,” Richter says.
Already, the Department of Health is using advanced imaging algorithms, assisting physicians who may have spent a full day looking at hundreds of scans. Algorithms are also helping ophthalmologists to determine at what stage of diabetes a patient might be by reading retinal scans. And AI assistive technology is analyzing images taken during colonoscopies to spot abnormalities or tumors that the person performing the procedure might not see.
All AI diagnoses are dependent on the data AI has learned from, so big data is essential.
Richter says several research and development centers are underway and partnerships have been formed to further the Department of Health’s AI innovation. Universities and research establishments, including Khalifa University, continuously apply for grants.
Though Richter’s role is AI- and technology-focused, he believes these tools will bring patients and physicians together, creating more time for communication, which can also be considered important data in diagnostics.
“So a radiologist, instead of spending two hours looking at chest X-rays every day could spend their time talking to the patient who really has important findings in their X-ray,” he tells KUST Review.
THE POWERHOUSE COUPLE
But just as there are things human brains can’t do, there are things humans can do that AI can’t — like express true human emotion.
Not to mention, doctor-patient relationships foster trust. But that trust is based on positive interactions over time.
According to 2022 data from patient-engagement platform PatientPoint, the typical waiting time once a patient enters the office for a doctor’s appointment is 26.2 minutes. This waiting time is mainly due to the volume of patients and administrative tasks that might be done instead by AI.
Given that doctors are typically allotted 15 minutes for patient interaction — comparable to an assembly line — letting AI handle triage and admin work could allow a doctor to become a doctor again with time to listen, hear pertinent information, empathize and strategize with a patient.
A 2023 University of Arizona study concludes that 50 percent of people will trust an AI medical diagnosis only if it’s backed up by a human doctor.
Skeptics say AI will simply be a way to push more patients through the door at an even faster rate, money being the driver. And for good reason — there are 3,147 medical start-ups in the United States alone. The information is clear, however: AI has great potential, but it’s up to us how we use it.
“In short, both mind and machine need to work in synergy,” says Hamad Bin Khalifa University’s Qureshi.