By Robert L. Gordon III, Chief Growth Officer, SBG Technology Solutions, Inc.
The ongoing threat posed to our species by a global enemy – COVID-19 – challenges us to think about improving the design, development and deployment of technologies that can successfully combat the virus, eventually win, and return us to some sense of normalcy.
One of the technologies that has received a lot of attention across a multitude of sectors, especially healthcare, is Artificial Intelligence (AI). AI has improved the workflows of back-end operations, assisted with rapid classification of diseases, and opened opportunities through its predictive analytical prowess. Doctor-patient interaction, assisted living, text-based counseling and a long list of other health-related areas have experienced dramatic and meaningful improvements with the deployment of AI.
Yet many professionals in the commercial and government sectors still do not have a firm grasp of AI’s capabilities and potential advantages, much less its shortcomings. Overpromises, complex explanations, unwieldy processes, and theoretical applications have thwarted health sector leaders and innovators in their attempts to fully exploit AI’s powerful applications and then practically and effectively achieve breakthroughs against real-world challenges, problems and opportunities.
In a series of upcoming articles, I will be sharing what I hope are insightful nuggets of information to help you make more informed decisions on AI’s applicability and subsequent use. The short of it is that recent improvements in the AI landscape put more creative power into more human hands to achieve significant breakthroughs in the health care sector.
AI 1.0 has Value, but is Often Inaccessible
One of the first things you should know is that AI is in transition, moving from a 1.0 to a 2.0 version in terms of improved capabilities and ease of use. AI 1.0 established that AI could be used to solve hard, data-rich problems. With AI 1.0, computers use machine learning (ML) to crunch prodigious amounts of data, continuously getting smarter until they produce a result, e.g., recognizing and identifying an object in an MRI image.
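To make this concrete, here is a minimal sketch of that AI 1.0-style workflow: a model "crunches" a large labeled dataset to learn patterns, then classifies images it has never seen. It uses scikit-learn's built-in digits dataset as a stand-in for medical imagery such as MRI scans; the dataset and model choices are illustrative, not how any particular platform works.

```python
# Illustrative sketch of an AI 1.0-style workflow: learn from a large
# labeled dataset, then classify unseen examples. The digits dataset
# stands in for medical imagery such as MRI scans.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labeled 8x8 grayscale images

# Hold out a quarter of the images to simulate "unseen" cases.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)             # crunch the data to learn patterns
accuracy = model.score(X_test, y_test)  # evaluate on the unseen images
print(f"held-out accuracy: {accuracy:.2f}")
```

Even this toy pipeline shows why AI 1.0 is data-hungry: performance depends on having a large, clean, labeled dataset prepared in advance.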
Yet AI 1.0 has challenges that have prevented it from being “all it can be.” There are four main reasons for this.
First, in an AI 1.0 environment, models, data structuring and data cleaning have to be created from scratch, and it is not uncommon for a full-scale deployment to take as long as 18 months, often requiring legions of highly educated and highly paid data scientists to prepare the AI process for deployment.
Second, it is extremely, and often prohibitively, expensive to build and run AI 1.0 platforms, which often require numerous costly graphics processing units, or GPUs, in a cloud environment. AI 1.0 can cost anywhere from $10 million to as much as $100 million from ideation to full-scale deployment. As such, it has been difficult for many companies to afford AI and time-consuming to leverage its capabilities to the fullest extent.
Third, as we all know too well, data is all over the place: dispersed on laptops, on desktop hard drives, on private servers, in the cloud and at the edge (on local hardware devices). One challenge AI 1.0 faces is that in order to use this highly distributed sea of data, the data must be migrated to one place – usually the cloud – so that users can exploit the cloud’s computing capabilities. Moreover, some vital data sitting on a worker’s computer or on a colleague’s edge device never makes it to the data lake.
Fourth, AI 1.0 can be inefficient, owing to the often duplicative effort needed to identify, structure, migrate and provision all of the data mentioned in my third point – a cumbersome process that takes a lot of time and effort to navigate and manage.
You get the gist. AI 1.0 is not very accessible. What to do?
AI 2.0 is All About Accessibility
Recent innovations in AI 2.0 address AI 1.0’s inaccessibility: its exorbitant cost, lengthy deployment times, and duplicative, inefficient data management. How?
Ease of Use
If you look under the hood of AI 2.0 platforms, such as Sentrana’s DeepCortex, they have drag-and-drop, plug-and-play building blocks that allow users to quickly build custom models. The result is a simpler, more understandable user interface (UI) and user experience (UX) that make AI modeling accessible to more than just data scientists. Ordinary analysts are now empowered to create models.
New AI 2.0 platforms are compact, portable and scalable operating systems that fully automate what’s called the AI development lifecycle, using a process called “transfer learning.” I will explain transfer learning in greater detail in a subsequent article. For now, just know that transfer learning uses much less data in much less time to dramatically speed up AI’s learning and execution process with no significant loss in accuracy.
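The core idea behind transfer learning can be sketched in a few lines: features learned once on plentiful data are reused, so a new task needs only a small labeled sample. In this illustrative sketch, PCA stands in for a pretrained feature extractor (a real platform would reuse layers of a large pretrained network); the dataset, sample size of 200, and component count are all assumptions chosen for the demo.

```python
# Illustrative sketch of transfer learning: reuse features learned on
# plentiful data so a new task can be solved with far less labeled data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)  # 1,797 labeled 8x8 images

# Step 1: "pretrain" a feature extractor on the plentiful raw data.
# PCA is a stand-in for a large pretrained network's learned layers.
extractor = PCA(n_components=32).fit(X)

# Step 2: fine-tune a small model using only 200 labeled examples.
rng = np.random.default_rng(0)
labeled = rng.choice(len(X), size=200, replace=False)
head = LogisticRegression(max_iter=2000)
head.fit(extractor.transform(X[labeled]), y[labeled])

# Step 3: evaluate on everything else -- strong accuracy despite the
# tiny labeled sample, because the learned features were reused.
rest = np.setdiff1d(np.arange(len(X)), labeled)
acc = head.score(extractor.transform(X[rest]), y[rest])
print(f"accuracy with only 200 labels: {acc:.2f}")
```

The design point is that the expensive step (learning good features) happens once and is shared, while each new task only pays for a small, fast fine-tuning step.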
AI 2.0 maximizes collaboration. The building blocks I mentioned above are readily available to other users of AI 2.0 platforms, and accumulating an inventory of building blocks creates a collective “memory” – a collaborative network effect. Think of the share economy, where ride sharing, home sharing and picture sharing created new efficiencies in transportation, tourism and social networking. The same effect happens in AI 2.0 systems.
Moreover, AI 2.0 platforms can harness something called a “replication engine” to collect and assemble dispersed data from various sources – desktops, laptops, the cloud and the edge – and bring it to the user to construct more powerful AI models. Consequently, any asset created by a colleague anywhere can be available to a collaborating user to innovate with AI.
The result is a streamlined, automated, accessible AI platform that doesn’t have to be built from scratch. This puts immediate creative power into the hands of innovative practitioners and makes it easier for them to achieve breakthroughs on health care-related problems and opportunities. The future is now: you can exploit accessible AI 2.0 platforms to better serve your customers and empower your people to make meaningful change.
In my next article, I will take a deeper dive into the concept of “transfer learning” to help readers understand how this innovation has sped up the time in which AI 2.0 platforms can achieve precise and timely results.
About Robert L. Gordon
Robert L. Gordon III, Chief Growth Officer at SBG Technology Solutions (www.sbgts.com), has over 30 years of cross-sector experience, including overseeing non-clinical health solutions for the Department of Defense as the former Deputy Under Secretary of Defense for Military Community and Family Policy. He is also a former senior executive of a technology and services company focused on improving health care for seniors in assisted living facilities. You can reach Rob Gordon at firstname.lastname@example.org.