Tag: Artificial Intelligence

  • New Deepfake Technique Can Make Portraits Sing


    Deepfake is a term coined in 2017 for a technique that uses machine learning to superimpose existing videos and images onto source images. Now, a new report from Imperial College London and Samsung’s AI research lab in the UK has shown how a single image and an audio file can be used to make a portrait sing.

    Previously, the technique was used to produce lifelike videos from still shots. The researchers rely heavily on machine learning to generate realistic-looking results. A trained eye can easily notice the mechanical, almost eerie imitation, but it is still a big deal considering how little data is actually needed. The technique was even used to animate a portrait of Albert Einstein, the famous physicist, into a unique lecture.

    On a more entertaining note, Rasputin can be seen singing Beyoncé’s iconic “Halo”, with comical results. More realistic examples are also available: tweaked videos generated to mimic human emotions based on the input audio. Producing deepfakes has become remarkably simpler over time, even if the tools are not commercially available.



    However, people are reasonably worried about the implications of the technique. It can potentially be used to spread large-scale misinformation and propaganda using famous personalities as templates. US legislators have already started to take note of the complications that may lie ahead. Deepfakes have already caused harm, especially to women, who have had fake pornography surface to create an embarrassing spectacle for them. For better or worse, it is still too early to say how much good or harm the technology might end up causing.

  • Adobe Trains AI To Undo Facial Manipulations In Photoshop


    With the increase of fake and manipulated images on the internet, online frauds and scams have been rising in number. Adobe Photoshop, one of the most widely used image-editing applications, may soon receive a feature that lets users uncover changes previously made to images, enabled with the help of Artificial Intelligence (AI).


    Adobe researchers collaborated with UC Berkeley students to train an AI to detect facial manipulation in images edited with Adobe’s image-editing software, specifically changes made with Photoshop’s ‘Face Aware Liquify’ feature. It uses the same kind of technique used in forensics to match faces in a database against real-world images. Once incorporated, this will be a major step towards democratising image forensics.

    Previously, Adobe’s research emphasised splicing, removal and cloning. The new work, by contrast, focuses on detecting modifications to facial features, for instance facial expressions, made with the Face Aware Liquify tool. As per Adobe, the three main questions considered while researching the images are as follows:

    • Can the image detection tool detect any form of manipulated faces?
    • Can that tool decode the specific manipulation made to the image features?
    • Can the changes be reversed to recover the original image?



    Using deep learning, the researchers built a huge training set of images generated by scripting Photoshop. Notably, the image dataset was created by scraping thousands of images off the internet; facial warping and other distortions were then applied to them, which the AI learned to detect. The company states that this feature will greatly help restore people’s faith in social media. Profiles could be analysed more accurately if the feature is integrated into Adobe Photoshop in upcoming updates.

     
  • Alphabet’s New AI Defeats Human Players In A Multiplayer Game


    Concepts easily understood by humans are not as simple for machines. Variables always arise, and questions that machines can’t answer make complete autonomy difficult. DeepMind, the AI research lab of Google’s parent company Alphabet, trained its AI to play a game of capture the flag at a level greater than that of a human.

    One of the most basic games in principle, capture the flag pits two teams against each other with the primary objective of capturing a flag (any marker). The marker is located at each team’s base and has to be captured by the enemy team, which then has to return with it safely to its own base. Easy for humans to understand and play, but complex for a machine that has to make numerous calculations to strategise in a manner resembling humans.

    This stands to change with AI and machine learning. A report published by researchers at DeepMind, a subsidiary of Alphabet, details a system capable not only of learning capture the flag but also of devising strategies and planning at the level of human teams in id Software’s Quake III Arena. The paper reports that the AI was never taught the rules of the game; it was only told whether it had won or lost.

    The reasoning behind this training approach stems from the unpredictable behaviour an AI can exhibit as the learning process continues. Some of the researchers working on DeepMind’s AI previously worked on AlphaStar, the machine-learning program that beat professional StarCraft II players. The key technique utilized in the study was reinforcement learning, with rewards handed out to steer the software towards the goal.


    The agent utilized by DeepMind, appropriately dubbed For The Win (FTW), learns from on-screen pixels using a convolutional neural network, a collection of mathematical functions (loosely modelled on neurons in the human brain) arranged in layers. The resulting features are fed to two recurrent long short-term memory (LSTM) networks, one operating on a slow timescale and the other on a faster timescale. This enables a degree of prediction about the game world and lets the agent take actions through an emulated game controller.
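    The two-timescale idea can be sketched in a few lines. This is purely an illustrative toy, not DeepMind's actual architecture: the "networks" here are single tanh layers rather than real LSTMs, and all dimensions and the update period are invented for the example. The point is the structure: a fast core updates every frame, while a slow core updates only every few frames and feeds its state back into the fast core as context.

```python
import numpy as np

class TwoTimescaleCore:
    """Toy sketch of a fast/slow recurrent pair (not DeepMind's code)."""

    def __init__(self, obs_dim, hidden_dim, period=10, seed=0):
        rng = np.random.default_rng(seed)
        self.period = period  # slow core updates once every `period` steps
        self.W_fast = rng.normal(0, 0.1, (hidden_dim, obs_dim + 2 * hidden_dim))
        self.W_slow = rng.normal(0, 0.1, (hidden_dim, 2 * hidden_dim))
        self.h_fast = np.zeros(hidden_dim)
        self.h_slow = np.zeros(hidden_dim)
        self.t = 0

    def step(self, obs):
        # Fast state sees the observation plus both recurrent states.
        x = np.concatenate([obs, self.h_fast, self.h_slow])
        self.h_fast = np.tanh(self.W_fast @ x)
        # Slow state updates only on every `period`-th frame.
        if self.t % self.period == 0:
            s = np.concatenate([self.h_fast, self.h_slow])
            self.h_slow = np.tanh(self.W_slow @ s)
        self.t += 1
        return self.h_fast

core = TwoTimescaleCore(obs_dim=4, hidden_dim=8)
for frame in range(30):  # feed 30 frames of dummy "pixels"
    features = core.step(np.random.default_rng(frame).normal(size=4))
```

    In the real agent, the slow timescale is what lets the system hold on to longer-term intentions (such as a strategy) while the fast timescale reacts frame by frame.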


    Thirty FTW agents were trained together in this learning paradigm to improve real-world performance. The agents were reported to have formulated and enacted strategies that generalised across different maps, team rosters, and team sizes. As training progressed, the AIs learned human behaviours like following teammates, camping, and defending their base from attackers, while dropping tactics with no inherent advantage, such as following a teammate too closely.


    In a tournament involving 40 humans, randomly matched in games as both teammates and enemies, the AI surpassed the win rate of human players by a substantial margin. The Elo rating (from which win probability can be derived) of the AI was 1,600, compared with 1,300 for strong human players and 1,050 for the average human player. This held true even when the agents’ reaction times were slowed by a quarter of a second. Human players won only 12–21% of their matches, depending on skill level.
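    The Elo figures quoted above translate into win probabilities via the standard Elo expected-score formula, P(A beats B) = 1 / (1 + 10^((R_B − R_A) / 400)). A quick calculation shows what a 300-point gap means in practice:

```python
def elo_win_probability(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# FTW agents (~1,600) vs strong human players (~1,300):
p_vs_strong = elo_win_probability(1600, 1300)   # roughly 0.85
# FTW agents vs the average human player (~1,050):
p_vs_average = elo_win_probability(1600, 1050)  # well over 0.95
```

    So a 1,600-rated agent is expected to beat a strong 1,300-rated human about 85% of the time, consistent with the 12–21% human win rates reported.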

  • Google Trained Its AI To Learn Depth Perception From YouTube’s Mannequin Challenge


    Google’s latest blog post focuses on how depth perception works in videos where both the subject and the camera are in motion. A huge amount of footage was required for the study, which aimed to train an AI. The foundation of the training was footage in which the camera was in motion but the subject remained stationary.

    Google showed remarkable resourcefulness by using a source of its own that was perfect for the job: YouTube, Google’s video-sharing platform, has a vast amount of footage fitting the study’s parameters. In the Mannequin Challenge, a person, or more often a group of people, stays perfectly still, in anything from sitting poses to bizarre handstands, while one person pans the camera around the still subjects.

    To train the AI to detect human figures in a variety of scenes, Google used approximately 2,000 Mannequin Challenge videos from YouTube. Interestingly, depth sensing generally relies on multiple cameras viewing a scene from different angles. Google, on the other hand, taught its AI to create depth maps from footage with only one view, i.e. perception from a single camera unit.


    With its Pixel lineup of smartphones, the company has already achieved similar results for still images with Portrait Mode, which produces a bokeh effect. The benefits of the study extend to the augmented reality the company is known to dabble in; Playmoji from Google’s Playground is one such example.


    If the feature arrives for video, it will open up new and previously impossible effects in video capture, such as a live bokeh effect similar to the still-image one found in most smartphones. It may also one day lead to 3D images being produced from 2D scenes shot on smartphone cameras. Google’s progress on the software side proves that hardware improvements are not the only route to great strides in photography and videography.

  • Samsung’s Deepfake AI Is Capable Of Creating A Convincing Video Out Of A Single Frame


    Artificial Intelligence (AI) is expected to be the next big thing in technology. The idea of human-made objects existing as sentient beings capable of making decisions is optimistic and terrifying at the same time. In a development that straddles both emotions, researchers at the Samsung AI Centre in Moscow, Russia have developed a deepfake algorithm that can create a video out of a single image. These “living portraits”, as they are called, have been tested on some famous historical images, and the results are, well, surprisingly good.


    As mentioned previously, Samsung has developed and (somewhat) successfully tested the algorithm; in a video shared on YouTube, the AI can be seen in action. The researchers refer to the process as few-shot (or one-shot) learning: a model can be trained using only a single image to produce a convincing animated portrait shot. While it works with one image, using more, up to 32, improves the results further. This process of superimposing existing images and videos onto source images or videos using machine learning is generally known as deepfaking, a persistent problem on the internet today that is credited with the rise of fake news.



    These realistic talking-head models are generated using a convolutional neural network trained on a large dataset of talking-head videos with a wide variety of appearances; the Samsung researchers are reported to have used more than 7,000 publicly available images from YouTube videos. The technique generated deepfakes of historical personalities such as the famous scientist Albert Einstein, the painter Salvador Dalí, and the mysterious woman from Leonardo da Vinci’s Mona Lisa. While a few anomalies give away the obviously fake nature of the videos, they are an example of how far the technology can go. In the meantime, we will just leave this GIF of an expressive Mona Lisa for you to ponder.

  • AI Reveals Why Our Eyes Are Drawn Towards Specific Shapes And Colours


    It has been known for a long time that neurons in the brain respond differently to different objects, images and colours. This instinct is key to our survival, and our brain helps us by interpreting every cue differently. For instance, it has been deduced that in the inferior temporal cortex, certain visual neurons fire more when they detect specific images, text or other stimuli. A small study led by investigators at the Blavatnik Institute at Harvard Medical School used an AI system that can help determine what our neurons are interested in looking at.

    Similar experiments have been conducted before to answer the same question, but they used real images. The Harvard investigators instead used synthetic images custom-tailored to each specific neuron’s preference. Where real images are limited to stimuli present in the real world, AI-generated images change the whole scenario.

    Research Technique

    Will Xiao, a graduate student in the Department of Neurobiology at Harvard Medical School, designed the software that uses artificial intelligence (AI) to formulate images based on neural responses. The responses were obtained from six rhesus macaque monkeys, who were shown program-generated images in 100-millisecond blips. Various shapes and colours were gradually introduced via the software and morphed towards a final image; at the end of each experiment, the program had generated a super-stimulus for the neurons in question.
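    The closed loop described above can be caricatured as a simple hill-climbing search. This toy sketch is not Xiao's actual software: the real system used a generative network and live firing rates from macaque IT neurons, whereas here the "neuron" is an invented preference function over a 64-value vector standing in for an image. It only illustrates the principle of keeping whichever mutation the neuron responds to more strongly.

```python
import numpy as np

rng = np.random.default_rng(42)
preferred = rng.normal(size=64)      # the stimulus this fake neuron "likes"

def neuron_response(image):
    """Stand-in for a recorded firing rate: higher = stronger response."""
    return float(preferred @ image)

image = rng.normal(size=64)          # start from a random stimulus
initial = neuron_response(image)
for _ in range(500):
    candidate = image + rng.normal(scale=0.05, size=64)  # small mutation
    if neuron_response(candidate) > neuron_response(image):
        image = candidate            # keep mutations the neuron prefers
final = neuron_response(image)       # the evolved "super-stimulus" response
```

    After a few hundred iterations, the evolved stimulus drives the simulated neuron far more strongly than the random starting image, which is the essence of how the super-stimuli were obtained.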


    The results of these experiments were then analysed and found to be consistent over separate runs, indicating that the neurons really do respond differently to separate stimuli. This could be the key to comprehending various cognitive issues, for instance autistic disorders: by studying cells that respond preferentially, researchers can probe the reasons for the partial development of neurons that may underlie negative social impacts.

  • Is Artificial Intelligence Digging Its Roots Deeper Into Our Reality?


    Computers will overtake humans with AI within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours – Stephen Hawking

    It’s impossible to exist in this world without admitting that Artificial Intelligence is slowly taking over. One machine at a time, byte streams are making their way to the cores, learning how they function, predicting future events (and accurately at that), and providing solutions that may seem unorthodox but are eventually for the better. And instead of steering clear of this, we live under the misconception that we are enslaving Artificial Intelligence for our needs. A closer look suggests otherwise: AI is steadily breaking free of constraints and taking a road with a “Dead End” signboard staring us straight in the face.

    How Is AI Affecting Humanity?

    We let AI-enabled machines make decisions on our behalf all the time. AI customises our music playlists to our taste because we are too lazy to do so. Applications can scrutinise your résumés and documents, suggesting changes that are genuinely for the better. Machine-learning algorithms browse through the pictures on your phone, looking for patterns that help them filter out your best images. And all this happens in a matter of seconds. We gape in surprise at such instances of technology, impressed because our lives are becoming easier. Little do people realise that an algorithm that can shuffle through thousands of pictures to pick a perfect image can do much more if it’s given enough time.

    Setting The Necessary Constraints

    The question that arises is very simple. Where does this stop? Where do we set the limits for the applications of Artificial Intelligence? Should we do it at our convenience, or should we keep larger interests in view? Elon Musk, CEO and founder of SpaceX and Tesla and the mind behind the ‘Hyperloop’ concept, has addressed the exponential growth of machine-learning algorithms in our lives: “Unless you have direct exposure to groups like DeepMind, you have no idea how fast AI is developing.” DeepMind is a world leader in artificial-intelligence research and its application for positive impact. Musk has also said that the risk of something seriously dangerous happening keeps increasing, with AI potentially revealing its dark side within the next ten years.

    The Areas At Grave Risk

    Mentioned below are some of the major risks that accompany fully autonomous systems.

    • Social Manipulation: WhichFaceIsReal is a website that asks users to differentiate between real faces and AI-generated faces. Almost all users get more than half the answers wrong, a clear indication that AI is very, very capable of meddling with your mind. The next stranger you see on social media may have a computer-generated face. How would that make you feel?
    • Autonomous Weaponry: Weapons that identify targets and shoot them down are no longer a science-fiction gimmick. Lethal Autonomous Weapons, or LAWs, do exist and are as dangerous as you might think. On the surface, such weapons may seem like perfect substitutes for human soldiers: no risk equals better, more efficient results. But what if an enemy or a hacker gets through the security system that protects such a weapon? Needless to say, the results of such an incident could be nothing short of catastrophic.
    • Misunderstanding Commands: Misalignment of goals and instructions between human and machine may sound petty, but in reality it can be fatal. Say it’s 2030, and a businessman who is late for a meeting instructs his self-driving car to get him to the venue as soon as possible. The goal is clear to the machine, and it may accomplish what the person wanted, but in doing so it may ignore other factors and prove lethal to someone else on the road. Artificial intelligence is not half as considerate as you thought it was.
    • Monitoring Your Activities: If you use any device implementing machine learning, it will identify your day-to-day patterns and suggest routines that you will tend to follow, simply because you handed your patterns to the algorithms in the first place. Without you realising it, Artificial Intelligence has modified you, a human, to follow its instructions.

    There is no doubt that machine learning is necessary to explore many areas unknown to mankind: space exploration, fusion theories, advanced healthcare and genetic disorders are just some of the areas in which Artificial Intelligence can prove its worth. Another quote from the late Stephen Hawking reads:

    AI is likely to be either the best or the worst thing to happen to humanity


    It’s entirely up to us, the human race, how we make use of this amazing invention. Will we rule the algorithms, or let the algorithms rule us?

     

  • Antutu Releases AI (Artificial Intelligence) Benchmarking Tool


    Artificial Intelligence (AI) is becoming more popular by the day, and more and more companies are trying to offer some AI-centric feature in their smartphones. Antutu, the popular benchmarking tool for testing device performance, has released a new app called Antutu AI Review, meant to quantify the AI performance of a device with a score.

    Antutu said in its blog post: “In order to let everyone have an intuitive judgment on the AI performance of their mobile phones, Antutu officially released the Antutu AI Review public beta, providing a quantifiable standard for everyone to judge the difference in AI performance of different platforms.”

    Because AI means different things to different SoC makers, Antutu AI Review has tried to establish a unified testing standard through cooperation with those manufacturers. The test is divided into two sub-items: image classification and object recognition. Image classification is based on the Inception v3 neural network, with 200 pictures as test data; object recognition is based on the MobileNet SSD neural network, with a 600-frame video as test data. The final score is calculated from accuracy and speed: the higher both are, the higher the final score.
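    Antutu has not published its exact scoring function, but the described idea, rewarding speed while scaling the score down for low accuracy, can be illustrated with an invented formula. Everything here (the function name, the quadratic accuracy penalty, the ×100 scale) is hypothetical, purely to show why a fast-but-inaccurate device cannot win:

```python
def ai_score(images_per_second, accuracy):
    """Hypothetical combined AI score: accuracy in [0, 1], speed in items/s."""
    if not 0.0 <= accuracy <= 1.0:
        raise ValueError("accuracy must be between 0 and 1")
    # Penalise low accuracy more than linearly, so speed alone cannot win.
    return images_per_second * (accuracy ** 2) * 100

fast_accurate = ai_score(50, 0.95)  # fast and accurate
fast_sloppy = ai_score(80, 0.40)    # faster, but mostly wrong answers
```

    Under any weighting of this shape, the sloppy device scores far lower despite its higher throughput, which is the anti-cheating property the next paragraph describes.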

    There is a catch, however: if the speed is high but the accuracy of the image recognition isn’t, the benchmark gives a lower score. This is to prevent cheating, as a device won’t be able to ramp up its speed and give wrong answers to get a higher score.


    The AI benchmarking tool by Antutu seems like a good idea at the right time: the war for AI is heating up, and it will be interesting to see which companies are serious about it and which are just offering it as a gimmick. The app is available for beta testing as an APK file and will be released to the public soon.

     

  • MIT Researchers Use Reddit To Create An AI That Only Thinks About Murder


    Artificial Intelligence is one of those terms that induces polarising reactions. Some believe it’ll make our lives easier; others believe we will soon be living an episode of Black Mirror thanks to AI. For tech companies, however, Artificial Intelligence is their new favourite phrase. Google, for example, probably set the world record for saying “AI” the most times in an hour at Google I/O 2018, and ever since the Pixel 2 devices launched, it has not shied away from using the term in almost everything it does.

    While AI assistants like Alexa and Google Assistant do make our lives easier to a certain extent, nobody wants a psychopathic AI in their lives. Anyone who has watched Altered Carbon can imagine how creepy and intrusive AI could become if we keep advancing the field without keeping it in check. Researchers at the Massachusetts Institute of Technology (MIT) have unveiled their new creation, a disturbed AI named Norman. Yes, this AI is named after the “lovable” character from the 1960 film Psycho. The researchers write:

    Norman is an AI that is trained to perform image captioning, a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.

    Essentially, this AI gives very disturbed responses compared with a general AI. While the Rorschach test has its own doubters as to whether it is a valid way to measure a person’s psychological state, Norman’s responses don’t need a test to be labelled creepy. The image below captures the level of effect that the Reddit thread had on Norman’s ability to perceive images.

    The researchers say the aim of the experiment was to show how easy it is to bias any AI by training it on biased data. The experiment raises some telling points about AI and its rapid advancement. Google recently came under scrutiny after it demoed the Google Assistant fooling human beings in a telephone conversation; Google has since conceded that when the feature rolls out, the Assistant will inform the person on the call that they’re talking to Google Assistant.

  • AI Was Used To Make Barack Obama Deliver This Announcement


    When Google launched the Pixel 2 and Pixel 2 XL back in October 2017, the company talked a lot about Artificial Intelligence. Google said that the integration of AI and machine learning within a smartphone is the future, and it might as well be right. Artificial Intelligence has grown leaps and bounds over the past few years and is now readily available in most smartphones in the form of a digital assistant like Siri or Alexa. Jordan Peele, the Oscar-winning comedian and filmmaker, decided to give us a glimpse of what advancements in AI could produce in the near future.

    In a video of Barack Obama talking about everything from Black Panther to Donald Trump, Jordan Peele delivers a telling PSA about the future of fake news. The “fake video” was made by Peele’s production company using two tools: Adobe After Effects and the AI face-swapping tool FakeApp. Even though the technology used is still at a nascent stage, the video’s authenticity is genuinely hard to judge.

    Since the dawn of social media, we have been plagued by fake news. There have been multiple reports of bots spreading fake news or agendas across the internet by making them go viral. While a photoshopped image or fake tweet can still be recognised by people with a keen eye or an open mind, a carefully doctored video made with AI can be far harder to spot. Adobe, the creator of Photoshop, is already working on an audio-editing tool called VoCo. It is like Photoshop, but for speech: users can generate new words using a speaker’s recorded voice. This could be a groundbreaking addition to the already prominent fake-news industry.

    The process of making fake videos, once a complicated job, could soon become the hobby of a teenager with a computer, all because of advancements in AI. As with any form of technology, AI used maliciously can create a lot of trouble, especially when doctored videos can go viral in the blink of an eye.

    As Peele says at the end, we need to be more vigilant and rely on trusted news outlets when it comes to important topics. Fake news is easy to spread, but it won’t be of any use if we are careful enough not to believe and share everything we read on the internet. While scientists work on tools to spot fake AI videos, let’s make sure we question every provocative video on the internet and find a credible source before we hit the share button.

  • Amazon Alexa Moves One Step Closer To Human-Like Interaction With New Feature


    Artificial Intelligence is great. The way it has grown over the years, especially its implementation in our smartphones and smart-home devices, is really impressive. While people joke about how AI is still very robotic and far from human-like, Amazon has taken a first step towards fixing that: it has announced a follow-up mode for its voice assistant, Alexa.

    Alexa, now present in everything from smartphones to smart speakers, is a great tool for people with smart-home devices. However, asking questions of Alexa, or pretty much any AI assistant, can be very monotonous and robotic: every time, you need to say a wake word like “Alexa” or “OK Google” to make it listen to you.

    With the new mode, users can ask Alexa follow-up questions without saying the wake word every time. Alexa will now listen for five seconds after its first response; if the Alexa-powered device is ready for a follow-up question, the blue indicator light will flash, signalling the user to ask. People worried about their devices listening to them all the time can switch the feature off, and if no follow-up question is asked within the time frame, the device returns to sleep mode.
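    The behaviour described above amounts to a small timed state machine. This is a minimal sketch, not Amazon's implementation; the class and method names are invented, and timestamps are passed in explicitly so the logic is easy to follow: after a response, a five-second listen window opens, speech inside the window needs no wake word (and renews the window), and silence past the window puts the device back to sleep.

```python
FOLLOW_UP_WINDOW = 5.0  # seconds of wake-word-free listening

class FollowUpMode:
    """Toy model of Alexa's follow-up mode (hypothetical, for illustration)."""

    def __init__(self):
        self.window_closes_at = None  # None means the device is asleep

    def finish_response(self, now):
        # Called when the assistant finishes speaking: open the window.
        self.window_closes_at = now + FOLLOW_UP_WINDOW

    def hears_speech(self, now):
        """True if speech at time `now` is accepted without a wake word."""
        if self.window_closes_at is not None and now <= self.window_closes_at:
            self.window_closes_at = now + FOLLOW_UP_WINDOW  # window renews
            return True
        self.window_closes_at = None  # window expired: back to sleep
        return False

device = FollowUpMode()
device.finish_response(now=0.0)
accepted = device.hears_speech(now=3.0)   # within 5 s: no wake word needed
rejected = device.hears_speech(now=20.0)  # window long expired: ignored
```

    The renewal on each accepted utterance is what allows a whole chain of follow-up questions from a single wake word.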

    Amazon has also clarified that it won’t listen to you all the time. It claims that Alexa will not respond if it isn’t “confident you’re speaking to her”.

    For example, if she detects that speech was background noise or that the intent of the speech was not clear.

    This new feature is a step in the right direction for AI enthusiasts. However, people like Elon Musk have been vocal about their apprehensions regarding Artificial Intelligence, and the feature won’t go down well with those who remain wary of AI.

     

  • Tesla Is Working On Its Custom AI Chips Says Elon Musk


    AI is the next big thing in the tech world, and big companies like Apple and Google have embraced it. According to multiple reports, Tesla CEO Elon Musk talked up the company’s custom AI chips at the machine-learning conference NIPS, telling attendees that Tesla is “developing specialized AI hardware that we think will be the best in the world.”

    According to The Register, Musk said, “I wanted to make it clear that Tesla is serious about AI, both on the software and hardware fronts. We are developing custom AI hardware chips”.

    This report corroborates another from back in September, which claimed that about 50 people at Tesla are working on a custom AI chip. The team includes respected industry veteran Jim Keller, who previously worked at AMD and Apple and joined Tesla in January 2016 as vice president of Autopilot Hardware Engineering.

    Currently, the company uses Nvidia’s graphics cards to power its self-driving functionality, but with its own AI chip it would be able to produce the main chip for its self-driving cars in-house. That would also mean improved performance, since the company could customise the chip for its own workloads.

    Custom AI chips might help Tesla achieve its goal of full autonomy faster than one would have expected. At the same conference, Musk restated his ambitious timeline of about two years to get to Level 5 self-driving, the level at which humans can go to sleep in the back seat.

    Tweets from AI researcher Stephen Merity claim that Elon Musk predicted AI might become exponentially smarter than humans in just five to ten years.

  • AlphaGo AI Beats Lee Se-dol 3-0 to Win Google DeepMind Challenge Series


    The age-old question of whether Artificial Intelligence will one day take over the world has perhaps been answered. If you’ve been following the internet these past few days, you will have heard that Google’s AI AlphaGo has been going one-on-one with 18-time world Go champion Lee Se-dol in a five-match DeepMind Challenge Series. Today, in a historic victory for AI, AlphaGo beat Lee Se-dol 3-0 to clinch the series. Lee conceded the third game after 176 moves.

    AlphaGo is a program developed by DeepMind, a British AI company acquired by Google two years ago. Go is an ancient Chinese game long considered one of the toughest for an AI to master at a world-class level, and it has been one of the most sought-after AI challenges for its simple rules and vast space of possibilities.

    AlphaGo

    Lee Se-dol’s loss to the AI three times in a row proved possible what many had thought unimaginable. Google has shown that AlphaGo represents the next level of artificial intelligence. The third game further established the AI’s superiority: it was able to navigate tricky situations known as ko that did not come up in the first two matches.

    Now that AlphaGo has beaten Lee in three straight games, we wait to see if Lee can manage a win in the two games still left to play. If AlphaGo learns from how Lee plays, the AI has been improving with each game, which would make the remaining ones even harder for Lee to win.

    “I don’t know what to say and I would like to say sorry because I couldn’t show you my better results,” he said. “I think even if I go back I probably cannot defeat AlphaGo, and I think the competition was already settled in the second match.”

    He said, “Humans have psychological issues while AlphaGo does not. Nevertheless, I think I will be able to discover AlphaGo’s weak points in the fourth and fifth matches.”

  • AI-Based Doctor “Babylon” To Help Keep Humans Healthy

    AI-Based Doctor “Babylon” To Help Keep Humans Healthy

    Artificial Intelligence, as many analysts predict, is the next frontier for the tech industry. As it stands today, scientists and researchers are putting in long hours to make the dream of machines with “human-like intelligence” a reality rather than just a possibility.

    Intelligent machines and software with deep, human-like knowledge are growing in number and being employed for an ever-wider range of tasks. From hospitality to defence, artificial intelligence has been explored in various fields, the latest of which is healthcare.

    Babylon

    Babylon, a U.K.-based subscription health service, will soon launch an AI-based app designed to provide medical diagnoses to patients just as a real-life doctor would. The idea behind Babylon is to create an artificially intelligent ‘doctor’ that helps humans stay away from illness.

    For this, Babylon will go the extra step that your family doctor won’t: it will track your daily habits, collect data about your heart rate, diet and medical records, and cross-reference them with the symptoms you report to the app. Babylon will then scan its vast database to diagnose the illness and offer an appropriate course of action.

    The idea of Babylon is comparable to IBM’s Watson, which is already employed at the Memorial Sloan-Kettering Cancer Center in New York. Watson assists doctors in their work by going through a database of 600,000 medical evidence reports, 1.5 million patient records and clinical trials, and two million pages of text from medical journals, helping doctors develop treatment plans tailored to patients’ individual symptoms, genetics, and histories.

    Babylon’s founder, Parsa, reveals that the AI will not look to replace human doctors by acting as a substitute, but will rather keep a close eye on the user’s health and help users prevent illnesses before they happen.

    Talking about the AI-driven app, Parsa said, “For example, if your heart rate is faster than normal and your physical activity hasn’t increased, it’s a sign you’re either stressed or dehydrated or you’re fighting something. The platform can bring this to your attention and suggest the best course of action to fight the illness before it surfaces.” The app will also remind patients to take their medication, and follow up to find out how they’re feeling.
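    The kind of rule Parsa describes, an elevated heart rate without a matching rise in activity, can be illustrated with a toy check. Everything below is a hypothetical sketch: the thresholds, field names and message are illustrative assumptions, not Babylon’s actual logic.

    ```python
    # Toy illustration of the heart-rate rule Parsa describes.
    # Thresholds and wording are purely hypothetical, not Babylon's real algorithm.

    def health_alert(resting_hr_today, resting_hr_baseline,
                     steps_today, steps_baseline,
                     hr_tolerance=1.10, activity_tolerance=1.10):
        """Return an alert string if heart rate rose without a matching rise in activity."""
        hr_elevated = resting_hr_today > resting_hr_baseline * hr_tolerance
        activity_up = steps_today > steps_baseline * activity_tolerance
        if hr_elevated and not activity_up:
            return ("Heart rate is above your baseline without extra activity: "
                    "possible stress, dehydration or oncoming illness.")
        return None

    print(health_alert(82, 70, 5000, 6000))  # elevated heart rate, less activity: alert
    print(health_alert(72, 70, 9000, 6000))  # heart rate near baseline: no alert
    ```

    A real system would of course smooth these signals over time and use clinically validated thresholds rather than a single-day comparison.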


  • Hilton Hires a Robot Concierge Powered by IBM’s Watson Smart AI

    Hilton Hires a Robot Concierge Powered by IBM’s Watson Smart AI

    Artificial intelligence has progressed at a rapid pace over the past couple of years. Be it robots like Pepper, which can understand human emotions, or simple pieces of intelligent code such as Siri, which can draw on your past behaviour to assist with day-to-day tasks, artificial intelligence is slowly but surely taking center stage in our lives today.

    In a continuation of this trend, robots with artificial intelligence are now making their way to hotels near you. Hilton, the famous hotel chain, has partnered with IBM to create a robotic concierge fondly named “Connie” in memory of its founder, Conrad Hilton.

    https://www.youtube.com/watch?v=jC0I08qt5VU

    Connie is a Nao robot, a 58cm-tall French-made bot powered by IBM’s Watson AI. IBM claims the robo-concierge can understand speech, which enables it to “greet guests, answer questions about the hotel, and provide details about local services, sights and restaurants.”

    Talking about the move, Rob High, chief technology officer of Watson, said in a statement,

    “This project with Hilton and WayBlazer represents an important shift in human-machine interaction, enabled by the embodiment of Watson’s cognitive computing. Watson helps Connie understand and respond naturally to the needs and interests of Hilton’s guests—which is an experience that’s particularly powerful in a hospitality setting, where it can lead to deeper guest engagement.”

    Just in case you were wondering, Hilton’s friendly robo-concierge Connie is not the first robot to be employed by a hotel. The Japanese Hen-na hotel in Nagasaki has a staff largely made up of robots, which speak both Japanese and English.

