Categories
Uncategorised

New report outlines SA’s biggest challenges to AI adoption

Take yourself back to February 2020. Life was relatively normal, kids were at school, we physically went into work, and everyone was more certain of the paths they were on. A year later, people of all ages are a lot more tech savvy, having been forced to work from home, do online schooling or hold online gatherings just to keep in touch with loved ones. We have had to embrace the change and step out of our comfort zones, learning how to use technology to navigate everyday life. While it’s true that South Africa is still behind in digitization, it’s catching up fast thanks to COVID-19, which has boardrooms across the country focusing on digitization like never before.

One such focus is the efficiency driven by Artificial Intelligence and Machine Learning (AI/ML). SafriCloud surveyed SA’s leading IT decision makers to assess the sentiment and adoption outlook for these technologies amongst business and IT professionals. The results have been published in an eye-opening report entitled, ‘AI: SA – The state of AI in South African businesses 2021’.

‘Keen to start but facing a few challenges’ was the pervasive theme across the survey respondents, but with the global Machine Learning market projected to grow from $7.3 billion in 2020 to $30.6 billion by 2024*, why do we still see resistance to adoption?

Nearly 60% of respondents said that their business supports them in their desire to implement AI/ML, and yet only 25% believed that it is well understood at an executive level. While ‘fear of the unknown’ ranked in the top three adoption challenges both locally and internationally (Gartner, 2020), only 9.34% of respondents cited ‘lack of support from the C-suite’ as a challenge.

There is a clear degree of pessimism about the level of skills and knowledge to be found in the South African market. This pessimism is more pronounced at senior management level, where more than 60% rated ‘low internal skill levels’ as the top challenge facing AI/ML adoption. With nearly 60% of respondents rating the need to implement AI/ML in the next two years as ‘important’ to ‘very important’, and only 35% of businesses saying they currently have internal resources focused on AI/ML, the skills gap will continue to grow.

Artificial Intelligence and Machine Learning represent a new frontier in business. Like previous generations that faced new frontiers – such as personal computing and the industrial revolution – we can’t predict what these changes might lead to. All we can really say is that business will be different, jobs will be different and how we think will be different. Those open to being different will be the ones that succeed.

Get free and instant access to the full report, to discover whether your business is leading the way or falling behind: https://www.safricloud.com/ai-sa-the-state-of-ai-in-south-african-businesses/

Report highlights include:

  • The areas of AI/ML that are focused on the most.
  • The state of the AI job market and how to hire.
  • Practical steps to train and pilot AI/ML projects.
Categories
Artificial Intelligence

Fear of the Unknown: Artificial Intelligence

Artificial Intelligence (AI) is set to be the most popular and most developed technological trend of 2020, with a market value projected to reach $70 billion.

AI is impacting several areas of knowledge and business, from the entertainment sector to the medical field where AI is utilizing high-precision algorithms through machine learning that can produce more accurate diagnoses and detect symptoms of serious diseases at a much earlier stage.

The innovation that AI offers to industry, businesses, and consumers is positively changing all processes. The new decade will be driven by the rise of automation and AI-induced robotics.

However, there is a great deal of exaggeration and hysteria about the future of Artificial Intelligence and how humans will need to adapt and get used to living with it. In fact, AI is a topic that has polarised popular opinion. What is true is that AI will become the core of everything that humans interact with in the coming years and beyond. Hence, to form a clear opinion about AI and its impact, it is important to understand what it is and what types of artificial intelligence exist.

Artificial General Intelligence (AGI) is the type of AI that could perform any cognitive function the way a human does. The technology is not there yet, but it is developing at a fast pace, and there are interesting projects in this space such as Elon Musk’s Neuralink.

Today, narrow AI applications, designed to perform only one task, such as IBM Watson, Siri, Alexa, Cortana, and others, are the ones that share the world with us. The key difference between AGI (also called strong AI) and narrow or weak AI lies in goal setting and volition.

In the future, AGI will have the ability to reflect on its own objectives and decide whether to adjust them or not and to what extent. We have to admit that, if done right, this extraordinary technological achievement will change humanity forever.

However, there is still a long way to go to get to that point. Despite this, many fear that an Artificial Superintelligence (ASI) will one day surpass human cognition, an event also known as the technological singularity.

At the moment, two visible groups are emerging in society: on the one hand, the informed public, a group in which trust towards new and emerging technologies has been increasing over time; on the other hand, the mass population, a group in which trust remains stagnant.

Of course, social networks also play a role here. It’s not just about consumption but about amplification, with people sharing news more than ever and discussing the issues relevant to them. Trust used to flow from the top down; now it is established horizontally, peer to peer.

Will AI benefit or destroy society?

AI can only become what humans want it to become, since humans write the code behind their AI creations. If the mass population is increasingly anxious about AI, this is due to fear of the unknown. Perhaps it is also because there is very little information available about the benefits AI offers to counterbalance the view that AI will destroy society and take away people’s jobs.

For now, AI has been providing great benefits, and its wider adoption in the medium term can only benefit and optimise many areas of human activity.

Categories
Machine Learning

Python vs. Java: Uses, Performance, Learning

In the world of computer science, there are many programming languages, and no single language is superior to another. In other words, each language is best suited to solve certain problems, and in fact there is often no one best language to choose for a given programming project. For this reason, it is important for students who wish to develop software or to solve interesting problems through code to have strong computer science fundamentals that will apply across any programming language.

Programming languages tend to share certain characteristics in how they function, for example in the way they deal with memory usage or how heavily they use objects. Students will start seeing these patterns as they are exposed to more languages. This article will focus primarily on Python versus Java, which are two of the most widely used programming languages in the world. While it is hard to measure exactly the rate at which each programming language is growing, these are two of the most popular programming languages used in industry today.

One major difference between Python and Java is that Python is dynamically typed, while Java is statically typed. Loosely, this means that Java is much more strict about how variables are defined and used in code. As a result, Java tends to be more verbose in its syntax, which is one of the reasons we recommend learning Python before Java for beginners. For example, here is how you would create a variable named numbers that holds the numbers 0 through 9 in Python:

numbers = []
for i in range(10):
    numbers.append(i)

Here’s how you would do the same thing in Java:

ArrayList&lt;Integer&gt; numbers = new ArrayList&lt;Integer&gt;();
for (int i = 0; i < 10; i++) {
    numbers.add(i);
}

Another major difference is that Java generally runs programs more quickly than Python, as it is a compiled language. Java code is compiled ahead of time into bytecode, which the Java Virtual Machine then executes (typically with just-in-time compilation to machine code). By contrast, Python is an interpreted language: there is no separate compile step before a program is run.

Usage and Practicality

Historically, Java has been the more popular language, in part due to its lengthy legacy. However, Python is rapidly gaining ground. According to GitHub’s State of the Octoverse report, it has recently surpassed Java as the most widely used programming language. As per the 2018 developer survey, Python is now the fastest-growing programming language.

Both Python and Java have large communities of developers to answer questions on websites like Stack Overflow. As you can see from Stack Overflow trends, Python surpassed Java in terms of the percentage of questions asked about it on Stack Overflow in 2017. At the time of writing, about 13% of the questions on Stack Overflow are tagged with Python, while about 8% are tagged with Java.

Web Development

Python and Java can both be used for backend web development. Typically developers will use the Django and Flask frameworks for Python and Spring for Java. Python is known for its code readability, meaning Python code is clean, readable, and concise. Python also has a large, comprehensive set of modules, packages, and libraries that exist beyond its standard library, developed by the community of Python enthusiasts. Java has a similar ecosystem, although perhaps to a lesser extent.

Mobile App Development

In terms of mobile app development, Java dominates the field, as it is the primary language used for building Android apps and games. Thanks to the aforementioned tailored libraries, developers have the option to write Android apps by leveraging robust frameworks and development tools built specifically for the operating system. Currently, Python is not commonly used for mobile development, although there are tools like Kivy and BeeWare that allow you to write code once and deploy apps across Windows, OS X, iOS, and Android.

Machine Learning and Big Data

Conversely, in the world of machine learning and data science, Python is the most popular language. Python is often used for big data, scientific computing, and artificial intelligence (A.I.) projects. The vast majority of data scientists and machine learning programmers opt for Python over Java while working on projects that involve sentiment analysis. At the same time, it is important to note that many machine learning programmers may choose to use Java while they work on projects related to network security, cyber attack prevention, and fraud detection.

Where to Start

When it comes to learning the foundations of programming, many studies have concluded that it is easier to learn Python than Java, due to Python’s simple and intuitive syntax, as seen in the earlier example. Java programs often have more boilerplate code – sections of code that have to be included in many places with little or no alteration – than Python. That being said, there are some notable advantages to Java, in particular its speed as a compiled language. Learning both Python and Java will give students exposure to two languages that build on similar computer science fundamentals yet differ in instructive ways.

Overall, it is clear that both Python and Java are powerful programming languages in practice, and it would be advisable for any aspiring software developer to learn both proficiently. Programmers should compare Python and Java based on the specific needs of each software development project, as opposed to simply using the one language they prefer. In short, neither language is superior to the other, and programmers should aim to have both in their coding repertoire.

                         Python    Java
Runtime Performance                Winner
Ease of Learning         Winner
Practical Agility        Tie       Tie
Mobile App Development             Winner
Big Data                 Winner

This article originally appeared on junilearning.com

Categories
Artificial Intelligence

5 Key Challenges In Today’s Era of Big Data

Digital transformation will create trillions of dollars of value. While estimates vary, the World Economic Forum in 2016 estimated an increase of $100 trillion in global business and social value by 2030. Due to AI, PwC has estimated an increase of $15.7 trillion and McKinsey an increase of $13 trillion in annual global GDP by 2030. We are currently in the middle of an AI renaissance, driven by big data and breakthroughs in machine learning and deep learning. These breakthroughs offer opportunities and challenges to companies, depending on the speed at which they adapt to these changes.

Modern enterprises face 5 key challenges in today’s era of big data

1. Handling a multiplicity of enterprise source systems

The average Fortune 500 enterprise has a few hundred enterprise IT systems, each with its own data formats, mismatched references across data sources, and duplication.

2. Incorporating and contextualising high frequency data

The challenge gets significantly harder with the increase in sensors, which results in inflows of real-time data. For example, readings of the gas exhaust temperature for an offshore low-pressure compressor are of only limited value in and of themselves. But combined with ambient temperature, wind speed, compressor pump speed, the history of previous maintenance actions, and maintenance logs, this real-time data can create a valuable alarm system for offshore rig operators.

3. Working with data lakes

Today, storing large amounts of disparate data by putting it all in one infrastructure location does not reduce data complexity any more than letting data sit in siloed enterprise systems. 

4. Ensuring data consistency, referential integrity, and continuous downstream use

A fourth big data challenge is representing all existing data as a unified image, keeping this image updated in real-time and updating all downstream analytics that use these data. Data arrival rates vary by system, data formats from source systems change, and data arrive out of order due to networking delays.

5. Enabling new tools and skills for new needs

Enterprise IT and analytics teams need to provide tools that enable employees with different levels of data science proficiency to work with large data sets and perform predictive analytics using a unified data image.

Let’s look at what’s involved in developing and deploying AI applications at scale

Data assembly and preparation

The first step is to identify the required and relevant data sets and assemble them. There are often issues with data duplication, gaps in data, unavailable data and data out of sequence.

Feature engineering

This involves going through the data and crafting individual signals that the data scientists and domain experts think will be relevant to the problem being solved. In the case of AI-based predictive maintenance, signals could include the count of specific fault alarms over the trailing 7, 14 and 21 days; the sum of those alarms over the same trailing periods; and the maximum value of certain sensor signals over those trailing periods.
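As a rough illustration of such trailing-window features, the pandas sketch below computes 7-, 14- and 21-day rolling counts and maxima over a made-up daily log; the column names, values and window lengths are hypothetical.

import pandas as pd

# Hypothetical daily log for one asset: fault alarm counts and a sensor reading.
df = pd.DataFrame({
    "date": pd.date_range("2021-01-01", periods=30, freq="D"),
    "fault_alarms": [0, 1, 0, 2, 0, 0, 3, 1, 0, 0] * 3,
    "exhaust_temp": [410 + i % 7 for i in range(30)],
}).set_index("date")

# Trailing-window features of the kind described above.
for days in (7, 14, 21):
    df[f"alarm_count_{days}d"] = df["fault_alarms"].rolling(f"{days}D").sum()
    df[f"temp_max_{days}d"] = df["exhaust_temp"].rolling(f"{days}D").max()

print(df.tail())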

Labelling the outcomes

This step involves labelling the outcomes the model tries to predict. For example, in AI-based predictive maintenance applications, source data sets rarely identify actual failure labels, and practitioners have to infer failure points based on a combination of factors such as fault codes and technician work orders.
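For instance, a failure label might be inferred wherever a critical fault code coincides with a corrective work order, along the lines of this sketch (the fault codes, work orders and rule are invented for illustration):

import pandas as pd

# Hypothetical event log combining fault codes and technician work orders.
events = pd.DataFrame({
    "asset_id": ["A1", "A1", "A1", "A2"],
    "date": pd.to_datetime(["2021-03-01", "2021-03-02", "2021-03-03", "2021-03-02"]),
    "fault_code": ["F710", None, "F999", None],
    "work_order": [None, None, "replace pump", None],
})

# Label a failure where a critical fault code coincides with a corrective work order.
critical_codes = {"F999"}
events["failure"] = (
    events["fault_code"].isin(critical_codes) & events["work_order"].notna()
).astype(int)

print(events)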

Setting up the training data

For classification tasks, data scientists need to ensure that labels are appropriately balanced between positive and negative examples, so the classifier algorithm gets enough balanced data. Data scientists also need to ensure the classifier is not biased by artificial patterns in the data.
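One common way to handle this in scikit-learn is to weight classes inversely to their frequency rather than physically resampling; a minimal sketch on synthetic, heavily imbalanced data:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data: roughly 5% positive (failure) examples, 95% negative.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (rng.random(2000) < 0.05).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights examples inversely to class frequency,
# so the rare failure class is not simply ignored by the classifier.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))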

Choosing and training the algorithm

Numerous algorithm libraries are available to data scientists today, created by companies, universities, research organizations, government agencies and individual contributors.

Deploying the algorithm into production

Machine learning algorithms, once deployed, need to receive new data, generate outputs, and have some actions or decisions be made based on those outputs. This may mean embedding the algorithm within an enterprise application used by humans to make decisions – for example, a predictive maintenance application that identifies and prioritizes equipment requiring maintenance to provide guidance for maintenance crews. This is where the real value is created – by reducing equipment downtime and servicing costs through more accurate failure prediction that enables proactive maintenance before the equipment actually fails. In order for the machine learning algorithms to operate in production, the underlying compute infrastructure needs to be set up and managed. 
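In the simplest case, deployment can look like the sketch below: a saved model is loaded, incoming readings are scored, and assets above a risk threshold are queued for maintenance. The file name, feature columns and threshold are illustrative assumptions, not a reference architecture.

import joblib
import pandas as pd

# Load a previously trained failure-prediction model (path is hypothetical).
model = joblib.load("failure_model.joblib")

def prioritise_maintenance(readings: pd.DataFrame, threshold: float = 0.7) -> pd.DataFrame:
    """Score incoming sensor readings and return the assets needing attention first."""
    scored = readings.copy()
    scored["failure_risk"] = model.predict_proba(scored.drop(columns=["asset_id"]))[:, 1]
    flagged = scored[scored["failure_risk"] >= threshold]
    return flagged.sort_values("failure_risk", ascending=False)[["asset_id", "failure_risk"]]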

Closed-loop continuous improvement

Algorithms typically require frequent retraining by data science teams. As market conditions change, business objectives and processes evolve, and new data sources are identified. Organizations need to rapidly develop, retrain, and deploy new models as circumstances change.

The problems that have to be addressed to deliver AI at this scale are therefore nontrivial. Massively parallel elastic computing and storage capacity are prerequisites. In addition to the cloud, a multiplicity of data services is necessary to develop, provision, and operate applications of this nature. However, the price of missing a transformational strategic shift is steep. The corporate graveyard is littered with once-great companies that failed to change.

This article originally appeared on Makeen Technologies.

Categories
Machine Learning

The Future of HR from 2020: Machine Learning & Deep Learning

The future of HR lies in Deep Learning, which is machine learning on steroids. It uses a technique that gives machines an improved ability to find, and amplify, even the smallest patterns. This technique is called a deep neural network: deep because it has many layers of simple computational nodes that work together to process data and deliver a final result in the form of a prediction.

Neural networks were vaguely inspired by the inner workings of the human brain: the nodes are like neurons and the network is like the brain itself. But Geoffrey Hinton published his breakthrough work at a time when neural networks had gone out of style. No one really knew how to train them, so they were not giving good results. The technique took almost 30 years to recover. But suddenly, it emerged from the abyss.

One last thing we should know in this introduction: machine learning (and deep learning) comes in three flavours: supervised, unsupervised and reinforcement learning.

In supervised learning, the most common kind, the data is labelled to tell the machine exactly what patterns to look for. Think of it as a tracking dog that will chase down targets once it knows the scent it is looking for. That is what you are doing when you press play on a Netflix programme: you are telling the algorithm to find similar programmes.
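A toy supervised-learning example in scikit-learn: labelled flower measurements tell the model exactly which categories to look for.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Labelled data: each flower's measurements come with the species to predict.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))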

In unsupervised learning, the data has no labels. The machine simply searches for whatever patterns it can find. This is like letting a person examine tons of different objects and sort them into groups with similar characteristics. Unsupervised techniques are not as popular because they have less obvious applications, but interestingly, they have gained strength in cybersecurity.
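By contrast, an unsupervised algorithm receives the same measurements with no labels and simply groups similar items together, as in this k-means sketch.

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

# No labels this time: the algorithm just looks for natural groupings.
X, _ = load_iris(return_X_y=True)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])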

Finally, we have reinforcement learning, the latest frontier of machine learning. A reinforcement algorithm learns by trial and error to achieve a clear objective. It tries many different things and is rewarded or penalised depending on whether its behaviour helps or hinders it in reaching its goal. This is like teaching a child good behaviour through praise and affection. Reinforcement learning is the basis of Google’s AlphaGo, the program that beat the best human players at the complex game of Go.
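Reinforcement learning can be illustrated with a tiny multi-armed-bandit sketch: the agent tries actions, is rewarded or not, and gradually favours the action that pays off most. The reward probabilities are made up and this is far simpler than AlphaGo, but the trial-and-error principle is the same.

import random

# Three possible actions with hidden reward probabilities (unknown to the agent).
true_reward_prob = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1  # how often the agent explores instead of exploiting

for step in range(10_000):
    # Explore occasionally, otherwise exploit the best-looking action so far.
    if random.random() < epsilon:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])
    reward = 1 if random.random() < true_reward_prob[action] else 0
    counts[action] += 1
    # Incrementally update the running average reward for the chosen action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("estimated values:", [round(e, 2) for e in estimates])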

Applied to Human Resources, the current use of Machine Learning is still limited, although the growth potential is wide. It also presents a dilemma that must be resolved in the future, related to the ability of machines to discover talent in human beings beyond their hard, verifiable competencies, such as level of education.

Software intelligence is transforming human resources. At the moment its main focus is on recruitment, which in most cases is a very expensive and inefficient process in which the goal is to find the best candidates among thousands, although there are many other applications.

A first example would be the development of technology that allows people to create job descriptions that are gender-neutral in order to attract the best possible candidates, whether male or female. This would broaden the pool of job seekers and produce a more balanced population of employees.

A second example is the training recommendations that employees could receive. These employees often have many training options but cannot find what is most relevant to them; recommendation algorithms can therefore surface the internal and external courses that best suit the employee’s development objectives based on many variables, including the skills the employee intends to develop and the courses taken by other employees with similar professional goals.
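One simple way such a recommender could work is to represent the employee’s development goals and each course as vectors over the same skill areas and rank courses by cosine similarity. The skills, courses and numbers below are invented; real systems would also use collaborative signals from similar employees.

import numpy as np

# Toy vectors over the same skill areas: ["python", "leadership", "finance", "design"].
employee = np.array([0.9, 0.1, 0.0, 0.3])  # wants to grow Python skills, a little design
courses = {
    "Intro to Python": np.array([1.0, 0.0, 0.0, 0.0]),
    "Managing Teams": np.array([0.0, 1.0, 0.2, 0.0]),
    "Data Visualisation": np.array([0.6, 0.0, 0.0, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Courses most aligned with the employee's development goals come first.
ranked = sorted(courses, key=lambda c: cosine(employee, courses[c]), reverse=True)
print(ranked)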

A third example is Sentiment Analysis, a form of NLP (Natural Language Processing) that analyses the social conversations generated on the Internet to identify opinions and extract the emotions (positive, negative or neutral) they implicitly carry. Sentiment analysis determines:

-Who is the subject of the opinion.

-What the opinion is about.

-Whether the opinion is positive, negative or neutral.

This tool can be applied to words and expressions, as well as phrases, paragraphs and documents found on social networks, blogs, forums or review pages. Sentiment analysis determines the hidden connotation behind information that is subjective.

There are different systems of sentiment analysis:

-Sentiment analysis by polarity: Opinions are classified as very positive, positive, neutral, negative or very negative. This type of analysis is very simple with reviews made using scoring mechanisms from 1 to 5, where 1 is very negative and 5 is very positive.

-Sentiment analysis by type of emotion: The analysis detects specific emotions and feelings: happiness, sadness, anger, frustration, etc. For this, there is usually a list of words and the feelings with which they are usually associated.

-Sentiment analysis by intention: This system interprets comments according to the intention behind them: Is it a complaint? A question? A request?
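A minimal polarity-style sentiment analysis sketch using NLTK’s VADER lexicon; the example sentences are invented employee comments, and the thresholds on the compound score are a common convention rather than a fixed rule.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

comments = [
    "I love the new flexible working policy.",
    "The onboarding process was confusing and frustrating.",
    "The office moved to a new building.",
]

for text in comments:
    scores = sia.polarity_scores(text)  # neg / neu / pos / compound scores
    if scores["compound"] > 0.05:
        label = "positive"
    elif scores["compound"] < -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(label, "-", text)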

A fourth example is employee attrition prediction, through which we can predict which employees will remain in the company and which will not, based on several parameters, as shown in the example below.

[Figure: sample employee attrition predictions]
Source: IBM (IBM Watson sample dataset)
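A rough sketch of how such an attrition model could be built on IBM’s publicly available sample HR dataset with scikit-learn; the file name and the simple feature handling are assumptions for illustration, not IBM Watson’s actual pipeline.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# IBM's sample HR attrition dataset (local file name may differ).
df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")

y = (df["Attrition"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["Attrition"]), drop_first=True)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Estimated probability that each employee in the test set will leave.
print(model.predict_proba(X_test)[:5, 1])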

These four cases are clear examples of how Machine Learning elevates the role of human resources from tactical processes to strategic ones. Smart software is handling the mechanics of workforce management, such as creating job descriptions, recommending courses and predicting which employees are most likely to leave the company, giving HR the chance to react in time and apply corrective policies.

From the business point of view, machine learning technology is an opportunity to drive greater efficiency and better decision making. This will help everyone make better decisions and, equally important, will give Human Resources a strategic and valuable voice at the executive level.

Prof Raul Villamarin Rodriguez

Categories
Artificial Intelligence

BABYLON: THE GROWING AI TREND IN THE HEALTHCARE INDUSTRY

Artificial intelligence is not new, yet there have been rapid advances in the field in recent years. This has in part been enabled by developments in computing power and the huge volumes of digital data that are now generated. A wide range of applications of AI is now being explored, with significant public and private investment and interest. The UK Government announced its ambition to make the UK a world leader in AI and data technologies in its 2017 Industrial Strategy. In April 2018, a £1bn AI sector deal between the UK Government and industry was announced, including £300 million towards AI research. AI is celebrated as having the capacity to help address major health challenges, such as meeting the care needs of an ageing population. Major technology companies – including Google, Microsoft, and IBM – are investing in the development of AI for healthcare and research. The number of AI start-ups has also been steadily increasing. There are several UK-based companies, some of which have been set up in collaboration with UK universities and hospitals. Partnerships have been formed between NHS providers and AI developers such as IBM, DeepMind, Babylon Health, and Ultromics.

Healthcare Organization – Artificial intelligence can potentially be used for planning and resource allocation in health and social care services. For example, the IBM Watson Care Manager system is being piloted by Harrow Council with the aim of improving cost efficiency. It matches individuals with a care provider that meets their needs, within their allocated care budget. It also designs individual care plans and claims to offer insights for more effective use of care management resources. AI is also being used with the aim of improving patient experience. Alder Hey Children’s Hospital in Liverpool is working with IBM Watson to create a ‘cognitive hospital’, which will include an app to facilitate interactions with patients. The app aims to identify patient anxieties before a visit, provide information on demand, and give clinicians information to help them deliver appropriate treatments.

Medical Research – Artificial intelligence can be used to analyse and identify patterns in large and complex datasets faster and more precisely than has previously been possible. It can also be used to search the scientific literature for relevant studies, and to combine different kinds of data, for example to aid drug discovery. The Institute of Cancer Research’s canSAR database combines genetic and clinical data from patients with information from scientific research, and uses AI to make predictions about new targets for cancer drugs. Researchers have developed an AI ‘robot scientist’ called Eve, which is designed to make the process of drug discovery faster and more economical (K. Williams, 2015). AI systems used in healthcare could also be valuable for medical research by helping to match suitable patients to clinical studies.

Clinical Care – Artificial intelligence has the potential to aid the diagnosis of disease and is currently being trialled for this purpose in some UK hospitals. Using AI to analyse clinical data, research publications, and professional guidelines could also inform decisions about treatment.

PATIENT AND CONSUMER IMPACT

APPLICATIONS – A few applications that use AI to offer personalised health assessments and home care advice are already on the market. The app Ada Health Companion uses AI to operate a chat bot, which combines information about symptoms from the user with other information to offer possible diagnoses. GP at Hand, a similar app developed by Babylon Health, is currently being trialled by a group of NHS surgeries in London. Information tools and chat bots driven by AI are being used to help with the management of chronic conditions. For example, the Arthritis Virtual Assistant developed by IBM for Arthritis Research UK is learning through interactions with patients to give personalised information and advice concerning medication, diet, and exercise (Release, 2017). Government-funded and commercial initiatives are exploring ways in which AI could be used to power robotic systems and apps that support people living at home with conditions such as early-stage dementia. AI applications that monitor and support patient adherence to prescribed medication and treatment have been trialled with promising results, for example in patients with tuberculosis (L. Shafner, 2017). Other tools, such as Sentrian, use AI to analyse information collected by sensors worn by patients at home. The aim is to detect signs of deterioration, enabling early intervention and preventing hospital admissions.

PUBLIC HEALTH – Artificial intelligence can potentially be used to aid the early detection of infectious disease outbreaks and sources of epidemics, such as water contamination (B. Jacobsmeyer, 2012). AI has also been used to predict adverse drug reactions, which are estimated to cause up to 6.5 per cent of hospital admissions in the UK.

Babylon, a UK start-up, aims to “put an accessible and affordable health service in the hands of every person on earth” by putting artificial intelligence (AI) tools to work. Currently, the company operates in the UK and Rwanda and plans to expand to the Middle East, the United States, and China. The company’s strategy is to combine the power of AI with the medical expertise of humans to deliver unparalleled access to healthcare.

How does Babylon’s AI work?

A dedicated team of research scientists, engineers, doctors and epidemiologists are working together to develop and improve Babylon’s AI capabilities. Much of this work is focused on cutting-edge AI research, driven by access to large volumes of data from the medical community, continuous learning from Babylon’s own users, and feedback from Babylon’s own doctors.

The knowledge graph and user graph:

Babylon’s Knowledge Graph is one of the largest structured medical knowledge bases in the world. It captures human knowledge of modern medicine and encodes it for machines. We use this as the common basis that allows Babylon’s intelligent components to talk to one another. The Knowledge Graph tracks the meaning behind medical terminology across different medical systems and different languages. While the Knowledge Graph provides general knowledge about medicine, patient cases are kept in the User Graph. Combining the Babylon Knowledge Graph and the User Graph allows for further discovery. We can match symptoms with information and outcomes, constantly improving the information we provide.

The inference engine:

Simply understanding how users express their symptoms and risk factors is not enough to provide information on possibly matching conditions. At the heart of Babylon’s AI is our inference engine, a powerful set of AI systems capable of reasoning over a space of hundreds of billions of combinations of symptoms, diseases and risk factors, every second, to help identify conditions that may match the information entered. The inference engine gives our AI the ability to reason efficiently, at scale, to bring health information to millions.
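As a heavily simplified illustration of probabilistic reasoning over symptoms and conditions (not Babylon’s actual engine, and with invented numbers), a naive Bayes-style ranking might look like this:

# Toy "inference engine": rank candidate conditions given reported symptoms.
priors = {"common cold": 0.30, "flu": 0.10, "hay fever": 0.15}
likelihoods = {  # P(symptom present | condition), invented for illustration
    "common cold": {"cough": 0.6, "fever": 0.2, "itchy eyes": 0.1},
    "flu": {"cough": 0.7, "fever": 0.8, "itchy eyes": 0.1},
    "hay fever": {"cough": 0.2, "fever": 0.05, "itchy eyes": 0.8},
}

def rank_conditions(symptoms):
    scores = {}
    for condition, prior in priors.items():
        score = prior
        for s in symptoms:
            score *= likelihoods[condition].get(s, 0.01)
        scores[condition] = score
    total = sum(scores.values())
    return sorted(((c, v / total) for c, v in scores.items()),
                  key=lambda cv: cv[1], reverse=True)

print(rank_conditions(["cough", "fever"]))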

Natural Language Processing (NLP):

Our AI cannot provide information to patients if it cannot understand them, and patients will not use our AI if they cannot understand it. To help bridge that gap, we use Natural Language Processing (NLP). NLP enables computers to interpret, understand, and then use everyday human language and language patterns. It breaks both speech and text down into shorter components and interprets these more manageable blocks to understand what each individual part means and how it contributes to the overall meaning, linking the occurrence of medical terms to our Knowledge Graph. Through NLP, our AI can interpret consultations, summarise clinical records and chat with users in a more natural, human way.
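The sketch below is a drastically simplified illustration of that idea: free text is scanned for known medical terms, which are linked to entries in a toy knowledge graph. Babylon’s actual NLP stack is of course far more sophisticated; the surface forms and concept identifiers here are invented.

import re

# Toy "knowledge graph" index: surface forms mapped to canonical medical concepts.
concept_index = {
    "headache": "C_HEADACHE",
    "temperature": "C_FEVER",
    "fever": "C_FEVER",
    "runny nose": "C_RHINORRHOEA",
}

def link_concepts(text):
    """Find known medical terms in the text and link them to knowledge-graph concepts."""
    text = text.lower()
    found = []
    for surface_form, concept_id in concept_index.items():
        if re.search(r"\b" + re.escape(surface_form) + r"\b", text):
            found.append((surface_form, concept_id))
    return found

print(link_concepts("I've had a pounding headache and a temperature since Monday."))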

Machine Learning research at Babylon:

Throughout the Babylon platform we use Machine Learning (ML) for a variety of tasks. In the inference engine we combine probabilistic models with deep learning techniques to speed up the inference process. In the Knowledge Graph we predict new relationships between medical concepts based on reading the medical literature. In NLP we build language understanding models on large-scale datasets of interactions with our users and data from the web, and we use ML to teach our NLP system new languages.

Babylon would not be feasible without the use of cutting-edge ML techniques, so we have invested significantly in building a world-class research team in this field. Babylon is also keen to contribute back to the AI community through papers, blog posts, and by open-sourcing some of our work.

Services Babylon Offers:

Babylon engineers, doctors, and researchers have built an AI system that can take information about the symptoms someone is experiencing, compare that information with a database of known conditions and diseases to find potential matches, and then identify a course of action and related risk factors. People can use the “Ask Babylon” feature to ask about their medical concerns and get an initial understanding of what they might be dealing with, but this service is not intended to replace the expertise of a doctor or to be used in a medical emergency.

In pursuit of its mission, Babylon offers a “talk to a doctor” service via its app, GP at Hand, which provides 24/7 access to healthcare professionals through video or audio conferencing. The app can be downloaded from Google Play or the App Store. During a consultation, doctors can offer medical advice, answer questions, discuss treatment, and order prescriptions that can be delivered to the patient’s door. All of the patient’s clinical records are stored in a secure environment, and their health history can be accessed and referenced whenever it is needed. If patients want to revisit their appointment, they can review the medical notes and replay a recording of the appointment at any time.

Another feature available in the app is Healthcheck. Built with the support of doctors, scientists and disease experts, this AI tool takes answers to questions about family history and a person’s lifestyle, compares them to the medical database, and then creates a health report with insights to help someone stay healthy.

The start-up claims that, in its own tests, the AI system was accurate 80 per cent of the time, and that the tool was never intended to completely replace the advice of a real doctor, but rather to reduce waiting times and to help doctors make more accurate decisions. The world is facing an extreme shortage of doctors and medical professionals, and technology such as Babylon’s is one way to help improve the healthcare of millions of people. According to NHS England, “Every safety case [of Babylon] meets the standards required by the NHS and has been completed using a robust assessment methodology to a high standard.”

While it may not be a perfect system, Babylon shows that artificial intelligence has advanced enough to work alongside healthcare professionals and can be a useful tool. That said, patients still need to remain their own fierce healthcare advocates. If the advice received from artificial intelligence doesn’t seem to hit the mark, it is wise to seek a second opinion: from a human.

AI for the patient and provider

Babylon Health wants everybody with a mobile phone to have access to affordable healthcare. They believe an app that offers instant diagnosis is the key. As their CEO, Ali Parsa, told the Telegraph: “[Medical professionals] are the most expensive part of healthcare. And the second… is timing… [By] the time [most diseases] present their symptoms, a £10 problem has become a £1,000 solution.”

Babylon Health believes it can cut both of those costs. Today, Babylon Health offers a free app that makes it simple for users to track their health and consult its AI-powered chatbot. For a fee, users can video-chat with top doctors who can access that user’s health records and a set of proprietary AI-powered tools that Babylon Health claims can improve treatment quality. By tracking vitals, treatments, and outcomes across a broad user base, Babylon Health has tapped an incredibly valuable dataset, which allows it to continuously improve its AI’s performance alongside users’ health.

IBM Watson for Oncology has a narrower focus: improving the outcomes of cancer treatments. IBM believes it can give every medical professional treating cancer the same insight that doctors at top cancer research centres have. IBM has partnered with experts at Memorial Sloan Kettering to train its computers on a wealth of medical records and research. Launched in 2016, Watson supports doctors with patient-specific recommendations drawn from cutting-edge treatments, in a fraction of the time. According to Deborah DiSanzo, General Manager of IBM Watson Health, Watson for Oncology had already been used in the treatment of 16,000 patients by the third quarter of 2017. With computers handling the analysis, doctors can concentrate on what humans excel at: treating the emotional distress of a patient battling cancer.

Data for artificial intelligence is food for thought:

Both IBM Watson and Babylon Health agree: doctors can deliver better treatment by learning from the outcomes of other patients. Artificial intelligence can learn from historical data and forecast how a patient’s illness would respond to treatment options. Both companies use machine learning, a technique that has become synonymous with AI in recent years. Machine learning is an automated method by which a computer teaches itself to make decisions using training data. Training data is the fuel of AI, as described by Andrew Ng of Stanford University.

Babylon Health and IBM Watson have both designed systems that generate this “fuel” from their users. As they attract more users, they will generate better insights. This network effect is a virtuous circle in which the product becomes better as it gains more users. The downside of products with network effects is that they are notoriously hard to kick-start. Just think how difficult it is to get the first few people onto a dating site.

Babylon Health and IBM Watson have each teamed up with established players to overcome this challenge and get the fuel they need for training. Babylon Health is bootstrapping its product with help from a UK NHS organisation. The UK NHS is looking for ways to relieve its doctor shortage and will trial Babylon’s chatbot for six months in North Central London, an area covering 1.2 million residents. IBM Watson is partnering with Memorial Sloan Kettering to help train Watson on the wealth of clinical data and medical expertise that the centre is known for.

Regulatory risk: A potential challenge:

With AI-powered healthcare products showing so much promise, one might expect regulation to move quickly through the FDA. However, the FDA is currently struggling. As the Wall Street Journal puts it:

“How on earth are you going to regulate software that learns?”

Current regulations lack standards for assessing the safety and efficacy of AI systems, which the FDA has attempted to address by issuing guidance on evaluating AI systems. The first guidance classifies AI systems as “general wellness products”, which are loosely regulated as they present low risk to users. The second guidance legitimises the use of real-world evidence to assess the performance of AI systems. Finally, the guidance clarifies the rules for adaptive design in clinical trials, which would be widely used in assessing the operating characteristics of AI systems.

Notwithstanding these difficulties, things are looking bullish for AI-powered healthcare. Babylon Health and IBM are just two of many new initiatives that are extending the reach of healthcare by augmenting the part of it that doesn’t scale: doctors. While each of these organisations has its own view of the future, they all agree that AI will let our limited pool of medical experts bring the best treatments to the greatest number of people. Especially when the best treatment is acting before we become ill.

LIMITATIONS OF AI IN THE HEALTHCARE INDUSTRY

Artificial intelligence relies on digital data, so inconsistencies in the availability and quality of data restrict its potential. Likewise, significant computing power is required for the analysis of large and complex data sets. While many are enthusiastic about the potential uses of AI in the NHS, others point to practical difficulties, such as the fact that medical records are not consistently digitised across the NHS, and the lack of interoperability and standardisation in NHS IT systems, digital record keeping, and data labelling. There are questions about the extent to which patients and doctors are comfortable with digital sharing of personal health data. Humans have attributes that AI systems might not be able to genuinely possess, such as compassion. Clinical practice often involves complex judgements and abilities that AI currently cannot replicate, such as contextual knowledge and the ability to read social cues. There is also debate about whether some human knowledge is tacit and cannot be taught. Claims that AI will be able to display autonomy have been questioned on the grounds that this is a property essential to being human that cannot be held by a machine.

Overall, artificial intelligence technologies are being used or trialled for a range of purposes in healthcare and research, including the detection of disease, the management of chronic conditions, the delivery of health services, and drug discovery. AI technologies have the potential to help address important health challenges, but may be limited by the quality of available health data and by the inability of AI to possess some human characteristics, such as empathy. The use of AI raises several ethical and social issues, many of which overlap with issues raised by the use of data and healthcare technologies more broadly. A key challenge for the future governance of AI technologies will be ensuring that AI is developed and used in a way that is transparent and compatible with the public interest, while stimulating and driving innovation in the sector.

This article is co-authored by Prof Raul Villamarin Rodriguez, Aakriti Jain, Mohit Mohan Saxena, Epari Shravan and Vaibhav Yadav, Universal Business School.

Categories
Machine Learning

Will Artificial Intelligence reach the level of the human intellect by 2040?

Technological singularity is a hypothesis that predicts there will come a time when artificial intelligence will be able to improve itself recursively: machines capable of creating other machines that are even more intelligent, resulting in intelligence far superior to that of human beings and, perhaps even more shocking, beyond our control.

AI, Machine Learning, Neural Networks… these are terms that evoke, in equal measure, hope and fear of the unknown.

In the next 20 years, there will be more technological change than in the last two millennia. Technology is already much faster than the brain in some respects: a calculator multiplies 5-digit numbers in tenths of a second. But it works differently; for example, it does not have a level of connectivity equivalent to that of the neurons in a human brain.

However, if the exponential pace of Moore’s law does not stop and the neural-network research of giant corporations such as Google continues to advance, then by 2040 the degree of technological integration in our lives will far exceed the capacity of the human brain.

The word singularity was borrowed from astrophysics: a point in space-time, for example inside a black hole, at which the rules of ordinary physics break down. It was associated with the explosion of artificial intelligence during the 1980s by science-fiction novelist Vernor Vinge. At a NASA symposium in 1993, Vinge predicted that within 30 years we would have the technological means to create superhuman intelligence, and referred to a “singleton”: a “world order in which there is a single decision-making entity at the highest level, capable of exerting effective control over its domain and preventing internal or external threats to its supremacy”. He also asserted that, shortly after, we would reach the end of the human era.

Throughout history, some technological advances have caused fear. The fear of the new and the unknown is understandable; however, any technology can be turned to good or to evil, just as fire can be used to heat and cook food, or to burn people.

In the case of the singularity, it seems clear that one must be cautious, regulating its development without limiting it and, above all, trying to ensure that this future artificial intelligence learns from ethical and moral values, as well as from the mistakes and successes of our species. We must be clear in our conception of the term: human beings and machines are meant to coexist in symbiosis, not rivalry.

Mortality as an “option” by 2045?

On the other hand, we can ask whether mortality will be “optional” by 2045. Google has already started ambitious research initiatives, having concluded that curing ageing may be possible; that is why companies such as ‘Calico’ or ‘Human Longevity’ have been created to investigate it, alongside non-profit organisations such as the Methuselah Foundation. The possibilities are real, since immortality already exists in nature: some cells are immortal, and stem cells have the ability to reproduce indefinitely, just like cancer cells.

One of the steps to achieve this is to fully understand the structure of the diseases that are incurable today, and then eradicate them, as has happened with HIV, now a controllable chronic disease, or with diabetes. We should aim for the same with ageing: turn it into a controllable chronic disease and, later on, cure it for good. It is essential to begin human trials of the rejuvenation technologies that have been shown to be useful in other animals, leading to advances in human clinical trials as well.

Prof. Raul V. Rodriguez is an Asst. Professor at Universal Business School.