Welcome to Industry 4.0, considered by many experts to be the fourth industrial revolution. Artificial intelligence and data analytics are a big part of it and are already changing how supply chains work. Here are just some of the ways they make getting a product from the manufacturer to your home cheaper and more efficient.
IN THE FACTORY
Generative design: An algorithm receives design parameters (such as cost and information on available materials) and generates thousands of options to find the best one.
Order management: AIs handle complicated order information from multiple channels.
Quality control: Sensors inspect products for defects.
Predictive maintenance: AI monitors systems and machines for early signs something is about to break down, preventing expensive factory shutdowns (see the sketch after this list).
Compliance management: AI manages the red tape when the same product is sold in different markets with different regulations.
Customization: AI may be used to create such customized orders as bespoke suits and made-to-order shoes. And through a process called “reshoring” or “nearshoring,” in which production moves back home or to nearby markets, products can be customized closer to the point of sale at the last minute.
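To make the predictive-maintenance item above concrete, here is a minimal sketch of the underlying pattern: compare recent sensor readings against a historical baseline and flag drift before it becomes a failure. The sensor values and thresholds are invented for illustration.

```python
# A minimal sketch of the predictive-maintenance pattern: compare recent
# sensor readings against a historical baseline and raise a flag when
# they drift too far. All sensor values and thresholds are invented.
from statistics import mean, stdev

def needs_service(history: list[float], recent: list[float],
                  z_limit: float = 3.0) -> bool:
    """Flag the machine when recent readings sit more than z_limit
    standard deviations from the historical average."""
    baseline, spread = mean(history), stdev(history)
    return abs(mean(recent) - baseline) > z_limit * spread

# A month of normal vibration readings (mm/s) vs. the last hour.
normal = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.3, 2.1]
latest = [3.4, 3.6, 3.5]
if needs_service(normal, latest):
    print("Early warning: schedule maintenance before the line stops.")
```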
IN THE WAREHOUSE
Stocking: Digital cameras monitor inventory levels and AI robots pick, sort and pack products.
Finding damaged packages: Machine learning models scan and analyze images to spot damaged objects (see the sketch after this list).
Helping workers with wearable technology: Smart glasses “read” barcodes. Natural language processing helps humans work hands-free to pick items more safely.
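As a rough illustration of the damaged-package check, the sketch below assumes a vision model has already scored each parcel image from 0.0 (pristine) to 1.0 (crushed); simple routing logic then diverts suspect parcels to manual review. The Parcel type, scores and threshold are all hypothetical.

```python
# Hypothetical sketch of the damaged-package check: a vision model has
# scored each parcel image, and high scorers are pulled for inspection.
from dataclasses import dataclass

@dataclass
class Parcel:
    parcel_id: str
    damage_score: float  # output of an image-classification model

def route_parcels(parcels: list[Parcel], threshold: float = 0.7):
    """Split parcels into (ship, review) by damage score."""
    ship = [p for p in parcels if p.damage_score < threshold]
    review = [p for p in parcels if p.damage_score >= threshold]
    return ship, review

scanned = [Parcel("PKG-001", 0.05), Parcel("PKG-002", 0.91)]
ship, review = route_parcels(scanned)
print([p.parcel_id for p in review])  # ['PKG-002']
```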
THROUGHOUT THE PROCESS
Supply chain visibility: Internet of Things (IoT) devices provide instant information about such conditions as the location and temperature of shipments. Businesses can spot bottlenecks, manage disruptions in real time and make data-driven decisions.
Collaborative supply chains: Multiple companies use data and analytics to work together to plan and execute supply chain operations. The cooperative approach allows the companies to serve similar customers or achieve a common goal.
DELIVERIES
Optimal routes: Vehicle routing algorithms use such factors as capacity, delivery priorities and time windows to plot the most efficient routes (see the sketch after this list).
Real-time conditions: AI can monitor weather, traffic and other conditions to reroute as necessary.
Autonomous vehicles: Truck platooning technology can permit a group of vehicles to operate extremely closely, reducing wind resistance and decreasing fuel consumption for transportation between factory and warehouse or retailer. Smaller vehicles will be used for deliveries. Algorithms optimize routes while AI helps vehicles avoid collisions.
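For a taste of what the routing bullet above refers to, here is a minimal sketch of the simplest heuristic, nearest-neighbor ordering over straight-line distances. Real routing engines layer vehicle capacity, delivery time windows and live traffic on top of this; the coordinates below are invented.

```python
# A minimal route-planning sketch: the nearest-neighbor heuristic over
# straight-line distances. Production routing engines also weigh
# capacity, time windows and live traffic; coordinates are invented.
import math

def nearest_neighbor_route(depot, stops):
    """Greedily visit the closest unvisited stop, starting at the depot."""
    route, remaining, here = [], list(stops), depot
    while remaining:
        nearest = min(remaining, key=lambda stop: math.dist(here, stop))
        route.append(nearest)
        remaining.remove(nearest)
        here = nearest
    return route

depot = (0.0, 0.0)
deliveries = [(2.0, 3.0), (5.0, 1.0), (1.0, 7.0), (6.0, 6.0)]
print(nearest_neighbor_route(depot, deliveries))
```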
My office isn’t the most inspiring. It’s not bad, per se, but it’s not the peaceful lakeshore cabin conducive to creative thought and productivity that I’d like. If only it were socially acceptable to don my virtual reality (VR) headset and immerse myself in a futuristic cityscape or tropical haven and get all my work done. I want to pretend I’m floating among the stars on a spacecraft while replying to emails and writing my stories.
Jamie Gilpin, CMO at social media management tool Sprout Social, tells me that what I actually want is the metaverse. Sprout is one of many companies that have transitioned to a remote-first approach for their workers.
“Going to work in the metaverse may sound far-fetched but it may hold the answer to engaging workers in a virtual workspace. If your dream workspace is a beach, you might run into issues with sand getting into your keyboard,” she says. “The metaverse makes it possible to work wherever you want, without the limitations of the space. Allowing yourself to work in the environment where you feel most productive can yield incredible results.”
I had been thinking of VR, plain and simple. Is Gilpin right in saying the metaverse is the answer? What even is the metaverse?
As analysts for McKinsey and Co. wrote in a 2022 think piece, “if you’ve ever done a Google search for the term ‘metaverse,’ you’re not alone.”
Who hasn’t heard of the metaverse?
“The metaverse [was] the buzzword of 2022 in the same way that NFT was the buzzword of 2021,” says QuHarrison Terry, author of “The Metaverse Handbook: Innovating for the Internet’s Next Tectonic Shift.” “The metaverse is a fictional place imagined long before our current consumer-tech obsessions that has manifested into real progress. While the metaverse is far from a finished destination, there are thousands of people building it every second of every day.”
Herbert B. Dixon Jr. retired from the Washington, D.C., Superior Court in 2014. Before his retirement, he oversaw the courthouse’s most modern prototype courtroom, high-def TV screens and all. Now a regular contributor to the American Bar Association’s Judges’ Journal, he wrote in 2023: “The metaverse is a rapidly evolving idea. Describing the metaverse in 2023 is akin to explaining air or space travel to residents of the horse and buggy era. Every year, we see new technological advancements that a decade before would have seemed like science fiction.
“The metaverse makes it possible to work wherever you want, without the limitations of the space.”
— Jamie Gilpin, CMO at Sprout Social
“The metaverse has been referred to as the three-dimensional internet and the future of the internet. My description of the future metaverse involves a digital universe (which may be real-world or imagined images) that your avatar enters to interact with other avatars.”
I don’t necessarily want an online representation of myself; I just want to pretend I’m working somewhere inspiring and quiet. But should I want to remain in my beautiful digital workspace, I’ll need an avatar to collaborate with my colleagues. They need a visible object in their digital environment that they can call “Jade” and I’ll need their avatars too. Yes, OK, online meeting platforms exist and I can change my background there and pretend I’m somewhere exotic but I want full immersion here.
Mariapina Trunfio, associate professor of economics and business management at the University of Naples, says the metaverse “defines a collective, persistent and interactive parallel reality created by synthesizing virtual worlds where people can use personal avatars to work, play and communicate with each other.”
In her 2022 Virtual Worlds paper, Trunfio explains that virtual technologies enhance the perceived immersion with the character realness of the avatars and residents: “Usually networked and situated with intelligent agents, they allow users to interact with virtual objects and intelligent agents freely, and to communicate with each other. In multiple forms, these worlds can be experienced synchronously and persistently by an unlimited number of users.”
I like the concepts of persistence and perceived immersion in Trunfio’s definition.
The McKinsey think piece also highlights that the metaverse means different things to different people:
“Some believe it’s a digital playground for friends. Others think it has the potential to be a commercial space for companies and customers. We believe both interpretations are correct. We believe the metaverse is best characterized as an evolution of today’s internet — something we are deeply immersed in, rather than something we primarily look at.”
In other words, as per the consultancy group’s working definition: “The metaverse is the emerging 3D-enabled digital space that uses virtual reality, augmented reality, and other advanced internet and semiconductor technology to allow people to have lifelike personal and business experiences online.”
ACCESS POINTS
To access the metaverse, says former judge Dixon, the user needs “a computer programmed to access the computer-generated environment, a head-mounted visual display or goggles to see the virtual environment, an audio headset, and hand- and body-tracking, motion-detecting controllers and sensors to provide a sense of touch and feel while traveling within the environment.”
Ernesto Damiani is the senior director of the Robotics and Intelligent Systems Institute and director of the Center for Cyber Physical Systems at Khalifa University. His definition of the metaverse focuses most on the technology needed to access it: “The metaverse is a digital, virtual space that humans wearing haptic interfaces (like helmets, gloves and visors) can enter and roam by projecting their presence as avatars. The metaverse puts together virtual reality, augmented reality and low-latency multi-party communication technology to allow people to have lifelike interactive experiences through their avatars.”
I own a VR headset. I mostly use it for gaming. Virtual reality offers me that escape from the real world — again, picture my peaceful and inspiring work-environment goals. This total immersion isn’t the only feature of the metaverse though, and it’s not entirely practical for going about your everyday life. Enter augmented reality (AR).
Leslie Shannon likes the AR side of things. She authored “Interconnected Realities: How the Metaverse Will Transform Our Relationship With Technology Forever.” For her, the metaverse is a partly or fully digital experience that brings together people, places and information in real time in a way that transcends what is possible in the physical world alone. She wants the metaverse to solve our problems — to be useful, not just entertaining.
“The problem is that smartphones and computers have done too well at solving the problem of delivering information and entertainment to us, exactly when and where we want it. To get this spectacular convenience, we’re prepared to pay a surprisingly high cost in terms of our connection to the people, places and things physically around us, and it’s a cost that we’re paying quite thoughtlessly today. You can probably name an incident in your own life within just the past week in which looking at a screen, rather than being present in your immediate surroundings, created a situation that caught you out socially, or made you neglect someone, or was even potentially dangerous. We’re all complicit in this one.”
How could an immersive digital world be the answer, Shannon asks. It’s not. But: “If we start thinking about a spectrum of experience, in which the far-left-hand side is 100 percent physical experiences, and the far-right-hand side is 100 percent digital experiences, then there also exists a middle point that is 50 percent physical and 50 percent digital, and sliding proportions of digital/physical mixes on either side of that middle point.”
Shannon says it’s the digital/physical mixes that deserve our attention. She calls this “interconnected realities.”
Making the ‘metaversity’
By: Suzanne Condie Lambert
Khalifa University thinks the metaverse will be vital to the way students learn in the future. That’s why it teamed with Microsoft UAE and Hevolus Innovation for the 2023 Metaversity Hackathon, inviting student teams to create metaverse classrooms to remove physical barriers, making immersive, engaging and collaborative experiences inclusive and accessible. “One day we will have a university that is fully in the metaverse,” says Dr. Arif Sultan Al Hammadi, Khalifa’s executive vice president and KUST Review’s editor-in-chief. “Students will get the best education in the world wherever they are.”
KU wants to be in the vanguard, and the hackathon, he adds, is a first step to getting there. Institutions of higher learning would benefit as well, requiring fewer physical resources. Al Hammadi points to the example of medical school cadavers, which are expensive and may pose ethical concerns.
Schools are already using interactive 2D screens to reduce the number of cadavers required to teach anatomy, he says. A 3D metaverse could be the next leap forward. There are downsides, Al Hammadi says. Cheating is harder to detect. The physical experience of labs and experiments can’t yet be fully replicated. And distance learning doesn’t offer the same social life as on-campus classes.
But Al Hammadi says that as models improve, students will eventually be able to get much of the same experience in the metaverse. Hadi Otrok, a KU professor of electrical engineering and computer science, sees promise especially in using avatars to free instructors from small tasks, like running tutorials. “The challenge will be,” he says, “how to get the students … engaged with you instead of on the phone.”
It will take courage to turn these ideas into a fully interactive online experience, Al Hammadi says, suggesting that a potential “metaversity” could start with just one degree to prove the concept. And Khalifa University, he says, wants to be on the front end of imagining that future.
“This concept of the metaverse is a world in which we can have the compelling, fascinating, relevant content that we currently access on screens, but integrated visually into our physical world in a way that enhances our lives, rather than removing us from them. This concept of the metaverse imagines the digital and physical aspects being incorporated with each other on a constantly sliding scale, so that sometimes we are fully immersed in a digital world, when that serves the purpose of the moment, but it is also possible to spend significant time fully immersed only in the physical world.
“This metaverse of interconnected realities will be a place where we combine digital information or entertainment from the world of the internet with our physical surroundings so that we can be more efficient, more informed, more delighted and more aware than we are today. A simple example of this enhanced future might be a sensor in my oven that connects with my AR glasses and, when the oven is on, displays its current temperature in a visual digital overlay when my gaze lingers on my oven for more than one or two seconds — useful when I’m on the other side of the kitchen.”
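Shannon’s oven example boils down to a simple dwell-time trigger: render the overlay only when the wearer’s gaze has rested on the object long enough and the oven is actually on. A toy sketch of that logic, with every name and value invented:

```python
# A toy sketch of the gaze-dwell trigger behind the oven example: show
# the temperature overlay only after the wearer has looked at the oven
# for a sustained interval. All names and values here are invented.
DWELL_SECONDS = 1.5  # "more than one or two seconds" in Shannon's telling

def overlay_text(gaze_target: str, dwell_seconds: float,
                 oven_on: bool, temp_c: int) -> str | None:
    if gaze_target == "oven" and dwell_seconds >= DWELL_SECONDS and oven_on:
        return f"Oven: {temp_c}°C"
    return None  # nothing rendered; the kitchen stays undecorated

print(overlay_text("oven", dwell_seconds=2.0, oven_on=True, temp_c=180))
print(overlay_text("oven", dwell_seconds=0.4, oven_on=True, temp_c=180))
```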
Are we talking about a heads-up display (HUD) fixed permanently in my vision? I’d quite like that. I wear glasses anyway. It would be so helpful if people in real life had little tags above their heads to remind me of their names — facial recognition in VR land. Or a mini-map in the corner of my field of view so I’d never get lost again, video game-style.
After all, HUDs aren’t new. In aviation, they date to the end of the Second World War when rudimentary systems were installed in a few military aircraft.
The modern-day fighter pilot helmet boasts an impressive HUD, and Iron Man had one too. Granted, Iron Man belongs to the realm of fiction, but plenty of technology emerged from the minds of creators and novelists — including the term “metaverse.”
“The term ‘metaverse’ was coined by author Neal Stephenson in his 1992 novel ‘Snow Crash,’” says Matthew Ball, author of “The Metaverse: And How It Will Revolutionize Everything.” “For all its influence, Stephenson’s book provided no specific definition of the metaverse, but what he described was a persistent virtual world that reached, interacted with, and affected nearly every part of human existence.”
There’s that persistence again.
The “affecting nearly every part of human existence” thing I’m not so keen on.
EVERYONE EVERYWHERE ALL AT ONCE?
“The metaverse is a vast, immersive virtual world simultaneously accessible by millions of people through highly customizable avatars and powerful experience creation tools integrated with the offline world through its virtual economy and external technology,” Wagner James Au says in his book “Making a Metaverse That Matters: From Snow Crash and Second Life to a Virtual World Worth Fighting For.” He also says, however, that the metaverse is not for everyone:
“Chances are you’ve seen more than several tech evangelists across various media outlets insist that we’ll all soon be in the metaverse. I can tell you from painful — but also amusing — experience that this is unlikely ever to be the case. And, no, you probably won’t wear a VR headset on a regular basis either.
“That said, it’s also safe to say at least one in four people with Internet connectivity will be part of the metaverse on some level. At a very conservative estimate, over half a billion people worldwide already use one or more variations of a metaverse platform now, from Minecraft and Roblox to Fortnite, VRChat and Second Life. That’s about 1 in 10 of the 5 billion people across the planet who use the internet.”
The majority of Au’s examples are games. Gaming companies are the pioneers in the metaverse space, well known as early adopters and prototype metaverse builders. Minecraft and Fortnite offer virtual worlds where players meet as avatars to play games and chat. They offer in-game payment systems and in-game assets that travel with players across platforms: from PC to console to mobile. They are also social spaces where gamers forge online relationships and communities.
IMAGE: Abjad Design
This gaming-world innovation correlates closely with many working definitions I found of the metaverse concept. Indeed, Ian Khan, author of “Metaverse for Dummies,” says the metaverse refers to virtual reality-based online worlds and notes that many of these worlds are gaming environments or online games. “Others function more as online virtual places where you can do other activities such as meet people, learn new things or simply hang out. And the types of virtual worlds you can find in the metaverse continue to expand and are likely to continue to evolve.”
Many of the experts I found, however, wouldn’t say we have a metaverse yet.
Dixon says the metaverse does not yet exist, but “its ultimate scope is constrained only by the limits of human imagination.”
Aakansha Saxena, assistant professor at the School of Information Technology, AI and Cyber Security, Rashtriya Raksha University, calls the metaverse a “concept”: “It can be understood as an infinite universe where communities of people can collaborate and enjoy the mechanisms of augmented reality, virtual reality, extended reality, online life and much more.”
That sounds like many of these games to me.
Khaled Salah, professor of electrical engineering and computer science at Khalifa University, throws a spanner in the works with his definition, saying: “A metaverse is an immersive and 3D virtual world in which people can interact through avatars to carry out their daily interactions, unlocking the potential to communicate, transact and experience new opportunities on a global scale.”
I’m struck by his use of the indefinite article: “a metaverse,” not “the metaverse.” Of all the people I asked, books I read and research articles I consulted, Salah was the only person to raise the question of multiple metaverses. Does each gaming platform or each individual game have its own metaverse?
And if each platform has its own, how can we move seamlessly between them all?
Maybe Mark Zuckerberg, CEO of Meta, has the answer. He said on the Lex Fridman Podcast that the metaverse is not a construct of connected virtual places:
“Instead, the metaverse is the point of time when we do and spend large portions of our everyday digital work and leisure in immersive 3D environments with VR and AR glasses.”
Meta, of course, used to be Facebook, and the company changed its name in 2021 to highlight its new direction. The company has since announced a U.S.$2.5 million investment supporting independent academic research across Europe into metaverse technologies because “since no one company will own and operate the metaverse, this will require collaboration and cooperation.”
Terry, author of “The Metaverse Handbook,” sums it up:
“Let me clear the air and first tell you what the metaverse is not. The metaverse is not a single technology. It’s not just a place we’ll visit in VR. It’s not something that can be created and claimed by the next Bezos or Gates. In fact, the metaverse is about as boundless and unownable as the internet, if not more so. Sure, there are entities that have contributed more to the internet than others. Of course, there are innovations that steered the course of the internet and influenced the experience of the web. But we didn’t wake up one day with the internet we see now. It was an ever-evolving thing.”
WHERE ARE WE GOING WITH THIS?
“The metaverse in the early 2020s is the equivalent of the mid-1990s in the development of the internet: Many people are talking about it, a few people are already building it, but no one can really define what it is, or what it will be able to do for us, or even if it will be relevant to anyone at all once it’s here,” Shannon says.
Khan, author of “Metaverse for Dummies,” agrees that in terms of development, the metaverse today is where the internet was in the 1990s:
“The early internet was shaped by new ideas, technologies and ways of doing things. With the right investments, adoption and usage, the internet grew into the internet we know today. Similarly, the metaverse today provides an interesting place for many activities, but many of them are still in the early days of development. The investment and attention put into building the metaverse over the next five to ten years will determine what the metaverse ultimately becomes and the value it creates.”
“The metaverse becomes more real every time we replace a physical habit with a digital equivalent.”
— QuHarrison Terry, author of “The Metaverse Handbook: Innovating for the Internet’s Next Tectonic Shift”
Per University of Naples’ Trunfio:
“The metaverse, like many innovations, is shrouded in mysticism and skepticism. If many believe it will be revolutionary and fully transform how people work, shop, socialize and play, others are skeptical, and see it as a fad. However, whether or not we think of the metaverse as a technological revolution, it is undeniable that the massive diffusion of this technology will impact on nearly all aspects of life and business in the next decade, allowing interaction in virtual and augmented spaces and a blend of both.”
Whether you’d say the metaverse is here already or well on its way, it’s clear that it’s the next big disruptor, the new place to be for all aspects of life.
After everything I’ve read and all the people I’ve spoken to, I think it’s funny that the definition of metaverse that resonates most with me is much more abstract than the very scientific approaches I’d usually turn to.
Shaan Puri, tech entrepreneur, posted a tweet in 2021 that sums it all up pretty nicely:
“The metaverse is the moment in time where our digital life is worth more to us than our physical life.”
Or as Terry puts it: “The metaverse is not just a place we’ll visit in VR. It is not a destination. The metaverse is a movement — a movement toward the digital-first livelihood we’ve slowly been adopting year over year, app by app. The metaverse becomes more real every time we replace a physical habit with a digital equivalent. We, the digital citizens of the internet, are manifesting the metaverse by trading time in the physical world for time online.”
While there have been many changes in the modern Olympic Games, two of the most notable are the evolution of athletic performance and the rise of technology.
Back in the 1896 games, for example, stopwatches marked the start and finish of a race and the timing of the athletes’ performances. This, however, has evolved over the years as technology changed.
In a sprint race, every millisecond counts, so even the starting gun is now electronic. The speakers connected to it are positioned so that no runner hears the shot even a millisecond before another.
At the finish line, a laser is projected across to a light sensor, also called a photoelectric cell or electric eye, positioned to receive the beam. The system includes two photocells set at different heights to prevent false readings from arm movements. When a runner crosses the finish line and interrupts the beam, the electric eye triggers a signal to the timing console, recording the runner’s time.
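The dual-photocell arrangement reduces to a simple rule: record a finish only when both beams are interrupted at nearly the same instant. A sketch of that logic, with invented timestamps:

```python
# A sketch of the dual-photocell rule described above: a finish is
# recorded only when both beams are broken at nearly the same instant,
# so an arm swinging through one beam cannot stop the clock.
# All timestamps are invented.
def finish_time(low_beam_breaks, high_beam_breaks, tolerance=0.02):
    """Return the first time (seconds) both beams are interrupted
    within `tolerance` of each other, or None if that never happens."""
    for t_low in low_beam_breaks:
        for t_high in high_beam_breaks:
            if abs(t_low - t_high) <= tolerance:
                return max(t_low, t_high)
    return None

# An arm flick breaks only the high beam at 9.581 s; the torso then
# breaks both beams at about 9.612 s, which becomes the official time.
print(finish_time(low_beam_breaks=[9.612], high_beam_breaks=[9.581, 9.611]))
```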
In marathons, however, there are so many competitors that not everyone can start at the same time. Wearable timers called radio-frequency identification (RFID) tags are essential.
In some events, athletes wear timing monitors that record split times as they pass, offering information that can assist in future training.
Training elite athletes has also evolved.
In the 1932 Olympic Games, the winner of the men’s 100-meter swim was clocked at 58.2 seconds. Fast forward to 2016 and the winner touches the wall more than 10 seconds sooner at 47.58 seconds.
What’s the difference?
Well, we know a lot more about human conditioning and the science behind how our bodies work and respond to different types of exercise. Athletes now train differently to maximize performance. We also know that sprint athletes need different training than endurance athletes, so competitors have become faster and stronger, training specifically for their sport.
Engineering is also at work on the outside factors that might enhance performance.
“There is no natural athlete. In fact, [being an] elite athlete is a very unnatural way of life — but that doesn’t make it bad.”
— Andy Miah, media researcher, University of Salford
It begins with materials science and comes down to things like friction and lubrication.
Friction, where body parts rub together, can result in painful sores. Too little friction, though, can inhibit balance and grip. To manage this, materials like polytetrafluoroethylene and silicone elastomers are prized for their low coefficients of friction, which cause less chafing.
Frictional heat can also cause injury. So, today’s athletic wear is equipped with fabrics that absorb and expel heat and maintain an ideal skin temperature. Some materials are also equipped with lubrication to reduce friction and manage moisture.
These materials also need to be durable yet flexible.
Other materials enhance an athlete’s ability by making minute changes to aspects of their bodies.
For example, some gear is fitted with compression technology — originally developed to mitigate circulatory issues in medical patients — that increases blood flow, subsequently reducing muscle exhaustion. Gear might also be made of materials with coatings that repel water using nanotechnology.
And footwear with carbon plates offers the runner enhanced energy return.
These sorts of performance-enhancing materials, however, strike some as unfair advantages. Some call this “tech doping.”
Doping, or taking performance-enhancing drugs, cost cyclist Lance Armstrong his seven Tour de France victories and his 2000 Olympic bronze medal.
While technology isn’t being ingested to increase performance, it is still altering an athlete’s physical ability with enhancements. And while Olympic athletes are monitored for drug doping by a global agency, the yeas and nays of gear are left up to each sport’s own regulatory authority, as when World Aquatics banned full-body swimsuits after swimmers who wore the LZR Racer set 93 world records. The organization banned the suit because it reduced muscle vibration and smoothed skin texture.
At the current summer Olympic Games in Paris, more enhancing equipment is being used, like Nike’s super spike running shoes, which are reported to improve running performance by 1.5 percent.
“Elite sports performances are always a combination of biological capability and the training of that ability through technological means,” says Andy Miah, a media researcher at the University of Salford.
Miah published a book on the topic in 2018 and helps investigate doping technology for the World Anti-Doping Agency (WADA) and the British Government. He is often consulted for his opinions on new technologies. Additionally, Miah serves as an academic adviser to the International Esports Federation.
“There is no natural athlete. In fact, [being an] elite athlete is a very unnatural way of life — but that doesn’t make it bad,” Miah says.
That may be true, but it doesn’t necessarily make it fair, either. If everyone had access to everything, it may even out the playing field, but Nike’s previously mentioned super shoe, for example, can be worn only by athletes sponsored by Nike.
As with most things, it will come down to what’s fair and doing the right thing.
The slogan for anti-doping in the sports world is “Play True.” It’s just a matter of defining what that means in the age of technology.
The domain google.com was registered on September 15, 1997. Prior to that, Google’s founders, Larry Page and Sergey Brin, were a couple of computer science doctoral candidates at Stanford University.
Take two theses, one algorithm, an initial prototype that used nearly half of Stanford’s entire network bandwidth, and a patent citing another patent that turned into the Chinese search engine Baidu, and you’ve got Google, a trillion-dollar tech company.
And it all started with a university research project.
The university research community has always been under outside pressure — political, economic and institutional — that has had the potential to impact, for better or worse, the nature and direction of academic research. In recent years, a new type of pressure has descended on university-based research: increased emphasis on the commercialization of research.
Commercialization is the process by which a product or service is introduced to the market. It is the entrepreneurial push that translates research discoveries and new technologies from laboratory to market. Universities around the world offer incubation and accelerator programs and assistance to commercialize the research conducted in their facilities.
This makes sense: Research that can be used to solve pressing problems or improve quality of life is most impactful in the hands of those who can benefit from it. To reach these people, research needs to hit the market. Taking innovations to market also provides an economic benefit: Whether through licensing technology to other companies or developing startups, commercialization creates new revenue streams.
A CRUCIAL ROLE
“Universities play a crucial role in society as producers and transmitters of knowledge,” says Parimal Patel of the University of Sussex. “In recent years, the discussion about whether universities can encompass a third mission of economic development, in addition to research and teaching, has received greater attention. Many have argued that within the remit of the third mission, university-industry research collaborations are extremely important mechanisms for generating technological spillovers. At the same time, many governments have introduced an increasing range of policies encouraging the involvement of universities in technology transfer.”
Things have not always been so. Licensing of inventions by academics became prevalent only in the early 20th century: In 1908, Frederick Cottrell received a patent to reduce industrial pollution, and in 1925, the University of Wisconsin-Madison founded its technology-transfer office to disseminate Harry Steenbock’s discovery that irradiating food to increase vitamin D could treat rickets.
Quaker Oats requested that technology, and the office licensed it in 1927.
The UK established the National Research Development Corporation in 1948, leading to the first hovercraft in the 1950s, but it took until 1985 for an increase in academic entrepreneurship to appear.
Things changed in the US with the 1980 Bayh-Dole Act. Formally known as the Patent and Trademark Law Amendments Act, Bayh-Dole created a uniform patent policy among the federal agencies that fund research, motivating more and more universities to become actively involved in the transfer of technology from lab to market. In the US in 2018, approximately U.S.$2.94 billion in licensing revenue was generated directly from technology transfer.
Now, there’s another push.
THE ARAB WORLD ENTERS THE CHAT
Sami Bashir, director of Khalifa University’s technology management and innovation office, says it is increasingly evident that universities in the Middle East want to make their mark in the world of research and development through sponsored research and technology transfer.
“In recent years, there has been a great emphasis in the Arab world for universities to incorporate an ‘economic development mission’ within their strategic vision and operation so as to contribute towards their local and regional economies,” Bashir says. “Innovation and entrepreneurship have become cornerstones for the vision of new economies in this region. Universities are viewed as promising outlets that not only provide scientific discoveries, but can also create business opportunities in the form of technology-based startups.”
DWINDLING RESOURCES
Bashir says he believes the drive for economic benefits from scientific research stems from the global economic downturn and the drop in oil prices. He says most Arab countries have relied on natural resources, such as oil and minerals, to support their economies, but these resources face scarcity and environmental challenges that would slow or hinder their economies in the near- and long-term. Accordingly, he says, research and education funding has increased in most Arab countries.
“Technology patenting and commercialization has increasingly led to significant advances in cutting-edge research, focusing primarily on innovations in life sciences, information technology, and software and data management,” Bashir says. “Unfortunately, the existing regulatory framework does not suit development of new technologies, nor the creation of technology-based startups, but this is changing. Additionally, universities are steadily being regarded as more relevant to the technology marketplace and easy to do business with. As a result, more universities have begun to create formal research-administration or technology-transfer offices to support translation of business ideas into viable technology products or processes.”
NOT EVERYONE IS A FAN, THOUGH
Ubaka Ogbogu, associate professor in the Faculty of Law at the University of Alberta, Canada, says the increasing push to commercialize university research has emerged as a significant science-policy challenge, with socio-economic benefits but also potential risks that are not as often considered.
“Studies of research-policy trends suggest that the commercialization ethos and associated pressures are unlikely to relent anytime soon and may, in fact, become the central or defining mission of university-based research,” Ogbogu says. “These studies also show that the push to commercialize is almost always presented as an unqualified social good that warrants broad governmental and institutional focus and support. Conversely, its risks and challenges are largely absent from policy statements and discussions.”
A 2014 Pew Research Center survey of members of the American Association for the Advancement of Science found that 47 percent believed the pressure to develop marketable products was having an undue influence on the direction of their research, while 69 percent viewed a focus on projects expected to yield rapid results as having a similar influence.
Hyun Ju Jung and Jeongsik Lee, both at the Georgia Institute of Technology, reviewed nanotechnology patents filed between 1996 and 2007 in a study conducted in 2014, finding that the “government-initiated emphasis on commercialization” of US university research “may undermine open paths towards novel technologies and hinder explorations of unknown fields.”
NARROWING RESEARCH SCOPE
The government-initiated emphasis in this case came in the form of the National Nanotechnology Initiative (NNI), a US government science and technology program launched in 2000. Jung and Lee consider the NNI a policy intervention that targeted the commercialization of technology with a focused research direction to promote national economic growth. They found that once the NNI was implemented, US universities benefited from increased interest — and funding — from industry but narrowed their research scope. This ultimately reduces their discovery of potential novel technologies, meaning they are less likely to generate technological breakthroughs — which “appear[s] to be inconsistent with the NNI’s objectives,” as the authors say.
Nanotechnology may be a narrow area to focus on, but these findings do suggest that an emphasis on commercialization narrows the scope of research.
Ogbogu was hardly surprised: “Several studies have found associations between commercialization activity and data withholding, the erosion of collaborative research relationships, and an unwillingness or reluctance to engage in certain research trends, such as open science initiatives, which conflict with the financial considerations that underlie the pursuit of commercialization.”
A POSITIVE IMPACT THROUGH KNOWLEDGE
One important aspect of knowledge sharing is the capacity to move research results from the laboratory into new or improved products and services in the marketplace. Commercialization of research is an important part of how science makes it to the public, which Ogbogu acknowledges. “It is a primary means through which medical products and services reach the market and consumers, which can, in turn, advance public health.”
He’s not wrong: A study by Boston University found 153 drugs and vaccines were developed by public research institutions between 1981 and 2011. The Covid-19 mRNA vaccine originated from research at a University of Pennsylvania bench.
Consider also that sharing knowledge from a university in an open-access manner would result in another company springing up to profit from its usage. If a company will exist or a license could be issued anyway, why shouldn’t a university benefit directly?
This is where the publish-versus-patent argument comes in.
PUBLISHING DILEMMA
In most jurisdictions, a patent cannot be obtained if an invention was previously known or used by others. Understandable, but publishing results counts as making an invention known. To be awarded a patent, you have to file your application before you publish, speak about or present your work.
In a publish-or-perish world, however, researchers can hardly afford to not publish papers, present at meetings or discuss their work.
Gangotri Dey works in Cornell University’s technology-transfer office, focusing on the physical sciences. She recognizes that the main goal of most of the university’s inventors is to publish their work in peer-reviewed journals but highlights that this differs between colleges: “A newly appointed assistant professor in the chemistry department is more eager to publish, whereas a person from an engineering college will likely think of patenting their invention before it is sent out for publication.”
In Dey’s experience, of the patents academics do file and secure, less than 10 percent are licensed to companies, with the life sciences and the medical school securing the most funding. The physical science division brings in less than 10 percent of the total revenue, showing that market success also tends to be field-specific and tied to a university’s goals. The other issue is the timeline.
“A typical patent takes about four years to be issued,” says Dey. “This varies and some fields are so heavily backlogged it may take ten years to get a patent. I assume there is no peer-review journal article that takes this long! My biggest concern though is that we are comparing apples to oranges in this scenario. A peer-reviewed journal article should be for the basic science that needs to be communicated to the public that is paying for this research with their taxes. A patent is filed to benefit the public from a ready product. You can win a Nobel Prize for an invention, but you might not be able to patent that same invention. In my view, you can’t compare the two.”
So is it possible to have the best of both worlds? At the Khalifa University technology-transfer office, Bashir says with a laugh: “That’s where we come in!”
Time to visit your local TTO, folks.
THE SHIFT TO STARTUPS
In recent years, there has been a paradigm shift toward commercializing technology through startups rather than patents. University inventions tend to need substantial development before they are ready to go to market, and universities are now trending toward funding these startups. The potential is evident: Stanford University alone birthed Google and HP.
Thomas Astebro, professor of entrepreneurship at HEC Paris, says the dramatic increase in the rate of university spinoffs can be attributed to the germination of biomedical research in the 1970s; the passage of the Bayh-Dole Act in 1980; increased financing of research by industry; changes in university guidelines and behavior; and changes in the scientific ethos of faculty and researchers.
Creating companies takes extensive work, expertise and focus, and academic institutions are not historically designed or optimized for this. Those that can shift focus quickly and create and support startup companies built around innovations designed within their walls can increase the likelihood that those innovations make an impact. Just as university research creates many innovations, universities can also participate in the startup-creation process in many ways.
LOCAL CHALLENGES
“We can and should learn from the experiences of universities in the US and Europe, but the adoption of impactful technology-transfer models in the Arab world must be established through our own learning and experiences in ever-changing operating environments,” Bashir says. He says he believes universities in the Arab region experience challenges that can be categorized as internal and external, with the most pressing being the adoption of intellectual-property policies.
Among internal challenges, most universities seem to lack policies and guidelines that clarify the rights of researchers whose discoveries are commercialized. The lack of such policies renders researchers more apprehensive in disclosing inventions to their universities or technology-transfer offices, Bashir says, which in turn reduces the chance of research commercialization.
Additionally, universities in the Middle East have been traditionally viewed as beit al hikma, or “houses of wisdom” — entities that provide academic scholarly activities, not industry-relevant applied research and development.
Establishing progressive external industry partnerships will be essential for attracting industry funds to university research activities and enhancing the delivery of research results to market.
“The biggest challenge is we mostly deal with technology readiness level one or, at maximum, level two,” says Dey. Technology readiness levels are used to assess the maturity of a particular technology, with level one the lowest and level nine the highest. At level one, scientific research is just beginning to be translated into future research and development; level two is reached once basic practical applications have been formulated for those findings. Level two is still highly speculative, as there is little to no experimental proof of concept for the technology.
“University research does not easily translate into a patent, product or company at such an early stage,” adds Dey. “But this problem can be partially mitigated with more industry-university collaborative research or sponsored research projects.”
As for external challenges, the issue of patent or intellectual-property law tops the list.
“Patent law in general has been enacted only recently in the Arab world; for instance, in Saudi Arabia in 1985,” Bashir says. “In most cases, the patent system was established to protect technologies and businesses coming from outside and not home-grown inventions and technologies. It’s clear that the patent legal framework here needs modernization and reforms to accommodate for the registration and protection of research discoveries coming out of universities.
“Technology transfer is not a stationary model. It is a dynamic and progressive model and continuously needs evaluation, assessment and modernization to be relevant and fit for purpose.”
If you were active on social media in the final months of 2022, odds are good you noticed a spike in avatars of your friends as fairies or anime characters or figures from a high-fantasy video game.
The images were from a company called Lensa, which uses artificial intelligence to turn selfies into art. And they had more than the social-media influencers buzzing. The technology set off a new wave of debate about the role of artificial intelligence in art as well as ethical issues involving racism, stolen images and revenge porn. But others look ahead to a future where AI assists artists instead of competing with them.
The Lensa app, which uses the Stable Diffusion deep-learning model to render images in various art styles, was not the first use of AI technology to disturb artists worried about being replaced by computers.
In 2018, a piece of digital art called Edmond De Belamy, which was generated by a machine-learning algorithm, sold at a Christie’s art auction for U.S.$432,500, well above its U.S.$10,000 estimate, setting off alarm bells among creatives fearing for jobs and the nature of art itself.
A similar cry erupted in September 2022 when Jason M. Allen won first prize in a digital category at the Colorado State Fair’s annual art competition with an AI-generated piece called Théâtre D’opéra Spatial. Allen used Midjourney, which translates text descriptions into digital artwork (and has been used to produce images in KUST Review).
CAPTION: Training apps with a wide variety of pictures of people from a wide range of ethnicities will help reduce AI bias, says Mutale Nkonde, founder and CEO of AI for the People.
But both images show that computer-generated art has more human involvement than the AI tag and Christie’s promotional language for Edmond De Belamy (“This portrait … is not the product of a human mind”) might lead you to believe.
Both pieces were products of humans: Edmond De Belamy was created by a Parisian art collective called Obvious, and Théâtre D’opéra Spatial by Allen. Both were initiated, selected, printed and promoted by those humans. And humans created the code that built them, infusing the final works with human aesthetics, biases and potential moral issues.
HUMANS BEHIND THE CODE
It’s important to remember that humans, not soulless code, are ultimately behind the AI product, says Ziv Epstein, a Ph.D. student in MIT Media Lab’s Human Dynamics group who has an eye on the emerging technology.
“When we talk about AI as a creator instead of a tool, it undermines credit and responsibility to the artists involved in the creation of AI art,” Epstein tells KUST Review. “Anthropomorphizing AI can undermine our capacity to hold people responsible for the wrongdoings of sociotechnical systems when an AI system commits a moral transgression: The perceived agency of the AI could be a sponge, absorbing responsibility from the other human stakeholders.
“We must be careful how we talk about AI and fight the current conceptualization of AI, typified by corporate-metaphysical circuit brains or embodied androids, lit by blue light and here to take your job. These narratives are not neutral and often cut along lines of power.”
BAKED-IN BIASES
A wrongdoing Epstein might have in mind: Among initial users of the Lensa avatar generator, some people who wear hijabs and/or have dark skin reported that their images seemed to have more glitches than others’ or didn’t look much like them. And this cuts to deeper issues of racism and sexism baked into the code and reported on frequently in recent years.
“AI bias in art hurts Black and other communities of color in two very specific ways,” says Mutale Nkonde, founder and CEO of AI for the People and a UN advisor on AI and human rights. “The app Lensa used AI to create ‘artist’ impression avatars for users and a beauty filter that made non-white women appear more European. This may seem innocuous, but there is data showing that algorithmic recommendation systems used within the image-sharing app Instagram have been found to increase mental-health complaints among young girls because they amplify images of women with unhealthy bodies.
“This could be true of women with non-European features who watch their physical appearance being erased and devalued,” she tells KUST Review. “This ethnic erasure contributes to the sales of skin-lightening creams in countries in Asia, Africa and the Gulf region and could result in women in these regions engaging in even more self-harming behavior.
A piece of digital art called Edmond De Belamy, which was generated by a machine-learning algorithm, sold at a Christie’s art auction for U.S.$432,500.
“The second concern is the data privacy of the people using these apps in order to work,” Nkonde says. “Users have to upload pictures, and in doing so give the company their biometric data which could be shared and/or sold to data brokers and then used to develop technologies like facial recognition. Facial-recognition systems in the West being used by law-enforcement agencies have problems recognizing people with dark skin and have led to the wrongful arrest of Black men.”
Again: Blame the humans behind the code.
EXPANDED DATASETS
Nkonde sees a solution, however.
“The best way to reduce these biases is by expanding the training datasets used to develop each app. In terms of the Europeanization of visual culture that means training those apps with a wide variety of pictures of people from a wide range of ethnicities. That way an Arab woman using it will be given an image that shows her unique beauty,” she says.
Without expanded datasets, apps and AI risk reflecting – and perhaps amplifying – biases.
“The Stable Diffusion model was trained on unfiltered internet content. So it reflects the biases humans incorporate into the images they produce,” Lensa says in its FAQ.
That unfiltered content used to train the model is also concerning to artists who fear their work is being used without their consent – and may damage their livelihoods by allowing the masses to replicate their style without paying for it.
ARTISTS WORRY
One of them is Greg Rutkowski, a Polish artist whose high-fantasy digital illustrations of defiant wizards and rampaging orcs are familiar to fans of such games as Dungeons & Dragons and Magic: The Gathering.
His style was commonly requested on Stable Diffusion before a November 2022 update to the model made it harder to copy specific artists’ styles.
“It’s a cool experiment,” he says of the people who used his name as a prompt. “But for me and many other artists, it’s starting to look like a threat to our careers,” he tells the MIT Technology Review.
When we talk about AI as a creator instead of a tool, it undermines credit and responsibility to the artists involved in the creation of art.
Artists have countered with a site called Have I Been Trained, which allows creatives to search for examples of their own work among the 5.8 billion images scraped from the internet, including sites such as Pinterest, to train Stable Diffusion and Midjourney.
Some groups have responded by banning AI-generated art, including online artist community Newgrounds and visual-media company Getty Images, which cited fears of future copyright claims as laws eventually catch up with technology.
Among the laws catching up with the accelerating technology: The United Kingdom in November 2022 announced plans to criminalize the sharing of pornographic deepfakes, often created as a form of revenge porn victimizing primarily women who don’t know their faces have been digitally attached to others’ bodies.
At the same time it changed its code to make copying styles harder, Stable Diffusion also introduced changes that make creating pornographic content more difficult. AI systems Midjourney and DALL-E 2 had previously banned adult-content creation. But other systems remain accessible to deepfake abuses.
A PROMISING TOOL
Still, some creators remain optimistic about AI-assisted art.
Alexander Reben used a machine-learning algorithm called GPT-3 to slough off a creative slump during the early months of the COVID-19 pandemic.
The algorithm, an OpenAI language model that preceded ChatGPT, writes original text – essays, fiction, news articles, even dad jokes – from a prompt. Reben played with the tool until he learned he could prod it to write the sort of text one might find on a label next to a piece of art on a gallery wall.
Reben pored through the outputs until he found some he liked, then created in real life the art they described. A whimsical story about an anonymous art collective known as The Plungers that created art with actual toilet plungers, for example, became an IRL installation as part of a series the AI titled “AI Am I?”
CAPTION: AI training encompassing billions of images allows tools to produce a wide variety of styles. Artists, however, are concerned that their work has been scraped from the internet without their consent, possibly threatening their livelihoods.
“As technology becomes more of an extension and amplification of our minds – just as a wrench is an extension of our hands and amplifies our physical ability – AI becomes more of a collaborator rather than a calculator,” he writes for BBC.com. “Unlike creative tools of the past, such as Photoshop, photographs or pigments, we are now working with tools that seem to have generative imagination, but perhaps no ‘taste.’ The human in the loop adds an important curatorial role in determining the ‘good’ versus ‘bad.’”
Or as AI-avatar creator Lensa says in a tweet: “As cinema didn’t kill theater and accounting software hasn’t eradicated the profession, AI won’t replace artists but can become a great assisting tool.”
Architecture student Qasim Iqbal, for example, uses Midjourney to visualize his designs.
“With Midjourney primarily being a text-to-image generator, it encourages you to summarize and define ideas through words and teaches you to be specific,” he tells My Modern Met.
He says it helps him “test concepts, ideas and directions for projects,” but “it should never be the originator of the idea.”
COLLABORATORS, NOT COMPETITORS
Others are embracing the technology by trading the pen or the brush for the word to create visual art. This is the emerging domain of “promptology” or the “prompt engineer,” using a new set of skills to coax a desired image out of the models with carefully crafted text.
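For a sense of what that workflow looks like in code, here is a minimal text-to-image sketch using the open-source Hugging Face diffusers library. The model choice and prompt are illustrative only, and running it requires a GPU with the diffusers and torch packages installed.

```python
# Minimal text-to-image sketch with the open-source diffusers library,
# illustrating the prompt-engineering workflow: the craft is in the
# wording. Model choice and prompt are illustrative examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A prompt engineer iterates on wording like this, layering style,
# lighting and composition cues until the output matches the vision.
prompt = ("a lighthouse on a cliff at dusk, oil painting, "
          "warm rim lighting, wide-angle composition, highly detailed")
image = pipe(prompt).images[0]
image.save("lighthouse.png")
```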
And then there is the utility of the technology for, well, anyone.
NightCafe, launched in 2019 and named after Vincent Van Gogh’s The Night Café, is one of the systems looking to fulfill the tech’s promise of democratized art:
“We create tools that allow anyone — regardless of skill level — to experience the satisfaction, the therapy, the rush of creating incredible, unique art,” it says, with the caveat that it does not seek to “make artists redundant.”
But for the “but is it art?” crowd, there’s still opportunity to invest skill, thought, talent and effort beyond the push of a button to create with AI tools.
Allen, the Colorado State Fair winner, spent 80 hours on Midjourney and sifted through 900 images before he settled on a picture to print on canvas.
It’s important to remember that humans, not soulless code, are ultimately behind the AI product, says Ziv Epstein, a Ph.D. student in MIT Media Lab’s Human Dynamics group
Other artists take much longer for their process, investing considerable time and brainpower to learn the technology and make tweaks to the code for the specific result they seek.
“Using machine learning is such a steep learning curve for me,” says Jake Elwes in the paper “AI and the Arts: How Machine Learning Is Changing Artistic Work.”
“I understand enough of the technology to use it and hack it, but I’m not writing algorithms myself, so it often takes months of research to work out how to use a model and get it to do what I want it to do. To be able to see some of my artistic voice coming through a black box or a ready-made, and then find an interesting way of subverting it. It’s a long process, not something you can just play with lightly.”
The same might be said for the technology itself.