Tag: machine learning

  • New Deepfake Technique Can Make Portraits Sing

    “Deepfake”, a term coined in 2017, refers to a technique that uses machine learning to superimpose existing video and images onto source images. Now, new research from Imperial College London and Samsung’s AI research lab in the UK has shown how a single image and an audio file can be used to turn a portrait into a singing video.

    Previously, the technique was used to produce life-like videos from still shots. The researchers rely heavily on machine learning to generate realistic-looking results. A trained eye can still spot the mechanical, slightly uncanny imitation, but the results are impressive given how little input data is actually needed. In one example, a portrait of Albert Einstein, the famous physicist, is animated to deliver a unique lecture.

    On a lighter note, Rasputin can be seen singing Beyoncé’s iconic “Halo”, with comical results. There are also more realistic examples, in which generated videos mimic human emotions based on the audio fed into the system. Producing deepfakes has become remarkably simpler over time, even if the tools are not yet commercially available.

    However, people are reasonably worried about the implications of the technique. It could be used to spread large-scale misinformation and propaganda, with famous personalities serving as templates for those who stand to gain from it. US legislators have already begun to take note of the complications that may lie ahead. Deepfakes have already caused harm, especially to women, who have been targeted with fabricated pornography. For better or worse, it is still too early to say how much good or harm the technology will ultimately cause.

  • Alphabet’s New AI Defeats Human Players In A Multiplayer Game

    Concepts easily understood by humans are not as simple for machines. Unforeseen variables and questions that machines cannot answer make complete autonomy difficult. DeepMind, the AI research lab owned by Google’s parent company Alphabet, has now trained its AI to play capture the flag at a level beyond that of human players.

    Capture the flag is one of the simplest games in principle: two teams face off, each trying to capture a flag (or any marker). The marker sits at each team’s base and must be seized by the opposing team, which then has to carry it safely back to its own base. It is easy for humans to understand and play, but complex for a machine, which must make countless calculations to strategise in a manner resembling a human team.

    This stands to change with AI and machine learning. A report published by researchers at DeepMind, a subsidiary of Alphabet, details a system capable not only of learning capture the flag but also of devising strategies and planning at the level of human teams in id Software’s Quake III Arena. According to the paper, the AI was never taught the rules of the game; it was only told whether it had won or lost.

    The reasoning behind this approach is the unpredictable behaviour an AI can exhibit as the learning process continues. Some of the researchers involved previously worked on AlphaStar, the machine learning program that beat professional StarCraft II players. The key technique used in the study was reinforcement learning, in which rewards are handed out to steer the software towards its goal, as the toy example below illustrates.
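
    As a loose illustration of that idea (and not DeepMind’s actual training code), the toy Python snippet below rewards one of two actions more often and shows how a simple value estimate drifts towards the better action; the action set, win probabilities, and learning rate are all made up.

        # Toy reinforcement-learning loop: the only feedback is a win/lose reward.
        # Everything here (actions, win probabilities, learning rate) is invented
        # purely for illustration.
        import random

        q_values = [0.0, 0.0]          # estimated value of each of two actions
        learning_rate = 0.1

        for episode in range(1000):
            # Mostly pick the action currently believed best, explore occasionally.
            if random.random() < 0.1:
                action = random.randrange(2)
            else:
                action = max(range(2), key=lambda a: q_values[a])
            # Reward of 1 for a "win", 0 for a "loss"; action 1 wins more often.
            reward = 1.0 if random.random() < (0.8 if action == 1 else 0.3) else 0.0
            q_values[action] += learning_rate * (reward - q_values[action])

        print(q_values)                # the estimate for action 1 ends up higher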

    The agent used by DeepMind, aptly dubbed For The Win (FTW), learns from on-screen pixels using a convolutional neural network, a collection of mathematical functions, loosely modelled on neurons in the brain, arranged in layers. The output is fed to two recurrent long short-term memory (LSTM) networks, one operating on a slow timescale and the other on a faster one. This gives the agent a degree of foresight about the game world and lets it act through an emulated game controller; a rough sketch of such an architecture follows.
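
    The sketch below, assuming PyTorch, an 84x84 RGB observation, and a hypothetical six-action controller, shows roughly what such an architecture looks like; it is a minimal illustration, not DeepMind’s FTW implementation.

        # Minimal sketch of a pixels -> CNN -> fast/slow LSTM -> action-scores agent.
        # The sizes, action count, and slow-update period are illustrative guesses.
        import torch
        import torch.nn as nn

        class FTWLikeAgent(nn.Module):
            def __init__(self, num_actions=6, hidden=256, slow_period=4):
                super().__init__()
                self.encoder = nn.Sequential(                 # reads raw on-screen pixels
                    nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
                    nn.Flatten(),
                )
                self.fast = nn.LSTMCell(32 * 9 * 9, hidden)   # updates every frame
                self.slow = nn.LSTMCell(hidden, hidden)       # updates every few frames
                self.policy = nn.Linear(hidden * 2, num_actions)
                self.slow_period = slow_period

            def forward(self, frames):                        # frames: (time, 3, 84, 84)
                fast_h = fast_c = torch.zeros(1, self.fast.hidden_size)
                slow_h = slow_c = torch.zeros(1, self.slow.hidden_size)
                scores = []
                for t, frame in enumerate(frames):
                    features = self.encoder(frame.unsqueeze(0))
                    fast_h, fast_c = self.fast(features, (fast_h, fast_c))
                    if t % self.slow_period == 0:             # slower timescale
                        slow_h, slow_c = self.slow(fast_h, (slow_h, slow_c))
                    scores.append(self.policy(torch.cat([fast_h, slow_h], dim=1)))
                return torch.stack(scores)                    # per-frame action scores

        agent = FTWLikeAgent()
        action_scores = agent(torch.rand(8, 3, 84, 84))       # 8 random frames as a stand-in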

    Thirty FTW agents were trained in stages under this learning paradigm. The agents were reported to have formulated and enacted strategies that generalised across different maps, team rosters, and team sizes. They learned human-like behaviours such as following teammates, camping, and defending their base from attackers, and as training progressed they dropped tactics with no inherent advantage, such as shadowing a teammate too closely.

    The AI surpassed the win rate of human players by a substantial margin in a tournament involving 40 humans, who were randomly matched into games as both teammates and opponents of the agents. The agents reached an Elo rating (a measure of relative skill from which win probability can be estimated) of 1,600, compared with 1,300 for strong human players and 1,050 for the average human player. This held true even when the agents’ reaction times were slowed by a quarter of a second. Human players won only 12-21% of their games against the agents, depending on skill level. The small calculation below shows what those Elo figures imply.
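
    The standard Elo expected-score formula, applied to the ratings reported above, gives a sense of how lopsided those numbers are; this is a generic calculation, not one taken from the paper.

        # Standard Elo expected-score formula applied to the reported ratings.
        def expected_score(rating_a, rating_b):
            """Expected score (roughly, win probability) of A against B."""
            return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

        print(round(expected_score(1600, 1300), 2))   # agent vs strong human  ~0.85
        print(round(expected_score(1600, 1050), 2))   # agent vs average human ~0.96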

  • ARM Announces New Design For Upcoming CPU And GPU

    ARM, whose basic architecture underpins most smartphones in the world, has announced new designs for its premium CPU and GPU. The new architecture will not appear in smartphones immediately, although going by past launches the first actual chips could arrive by the end of the year. According to the company, the Cortex-A77 CPU and Mali-G77 GPU are both more energy efficient and more powerful at machine learning workloads.

    As with any new iteration, the Cortex-A77 brings an overall bump in performance, but ARM has specifically promised a notable 20% improvement in IPC (instructions per clock) over the last generation, as the rough arithmetic below illustrates. The Cortex-A77 is also claimed to deliver significantly better machine learning performance, made possible by the company’s hardware and software optimisations.
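
    As a rough illustration of what a 20% IPC gain means, the snippet below multiplies a hypothetical baseline IPC by an assumed clock speed; the 2.6 GHz clock and the baseline IPC are made-up figures, and only the 20% comes from ARM’s claim.

        # Back-of-the-envelope arithmetic: throughput ~= IPC x clock speed.
        # The clock speed and baseline IPC below are hypothetical.
        clock_hz = 2.6e9
        old_ipc = 3.0
        new_ipc = old_ipc * 1.20                      # ARM's claimed 20% IPC uplift
        print(f"old: {old_ipc * clock_hz:.2e} instructions per second")
        print(f"new: {new_ipc * clock_hz:.2e} instructions per second")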

    In its announcement, ARM pointed out that most smartphones do not use a dedicated neural processing unit (NPU) for machine learning. The company argues that around 85% of smartphones today run machine learning on the CPU alone or on a CPU+GPU combination, and that even when a GPU or dedicated machine learning chip is present, the CPU is still involved in handing work over to the accelerator.

    ARM promises that its new CPUs deliver more power efficiency along with better raw processing performance, and claims to have quadrupled performance since 2013. Another area of focus is mobile gaming, and with it mobile VR and augmented reality experiences.

    The new Mali-G77 is the first GPU based on the company’s Valhall design, which reportedly delivers 1.4 times the performance of the older G76. It is also claimed to be about 30% more energy efficient and 60% faster at running neural network and machine learning workloads.

    For dedicated machine learning processing, ARM offers its own Project Trillium, a machine learning compute platform that works alongside the company’s CPUs. Since the project was announced in 2018, the company has doubled its energy efficiency and scaled performance up to 8 cores and 32 TOPS.

    This announcement sets the stage for companies that license ARM’s designs to begin optimising and building around the new architecture.

  • Apple Secretly Bought Danish Visual Effects Startup

    For the famously secretive Apple, buying a startup is nothing new. According to recent reports, Apple bought image processing software company Spektral for $30 million. The purchase was apparently made last year and never disclosed. Spektral develops software that can digitally separate a subject from its background.

    Apple introduced ‘Portrait Lighting’ last year, which isolates a subject and darkens the background. There seems to be a connection between the two, but it is not yet clear what Apple has planned. Spektral’s technology is based on state-of-the-art machine learning and computer vision techniques and can process both images and video; the software can cut people out of a video in real time on a smartphone while promising high-quality detail.

    Spektral had created a technique called cutouts, which it used in its software; the company was known as CloudCutout at the time. The software removes the need for green screens and creates a mixed reality for users. Reports also note that co-founder and Chief Technical Officer Toke Jansen now lists Apple as his employer. The company’s website suggests that the software offers a flexible trade-off between quality and computation time; the sketch below shows a much simpler, classical way to achieve a similar separation.
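
    Spektral’s own machine-learning approach is not public, but the snippet below shows a basic, classical way to cut a subject out of a still image using OpenCV’s GrabCut; the file name and bounding box are placeholders.

        # Classical foreground/background separation with OpenCV's GrabCut.
        # This is only a stand-in illustration, not Spektral's technique.
        import cv2
        import numpy as np

        image = cv2.imread("portrait.jpg")                    # hypothetical input file
        mask = np.zeros(image.shape[:2], np.uint8)
        bgd_model = np.zeros((1, 65), np.float64)
        fgd_model = np.zeros((1, 65), np.float64)
        rect = (10, 10, image.shape[1] - 20, image.shape[0] - 20)  # rough subject box

        cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
        foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
        cutout = image * foreground[:, :, np.newaxis].astype(np.uint8)
        cv2.imwrite("cutout.png", cutout)                     # background pixels are black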

    Apple’s interest in augmented reality has grown markedly in recent years. The company has been pushing developers to create augmented reality apps for its iPhones and iPads with ARKit. Some Apple analysts have also hinted that a pair of augmented reality glasses may be on the way, possibly arriving as soon as 2020, under the codename “T288”.

  • Google Knows When A Stranger Is Looking At Your Phone

    Many experts are touting machine learning and AI as the flagship smartphone feature of 2018, much as thin bezels and tall displays were for 2017. Google kickstarted the trend with the tight integration of Google Assistant and machine learning in the Pixel 2 smartphones.

    Applying machine learning to seemingly trivial day-to-day situations will be important, and that is where this new software from Google comes in. If you use public transport a lot, you will know that annoying feeling when someone is staring at your phone while you watch a video or have a private conversation.

    The new software can recognise when a stranger is staring at your phone and quickly alerts you. First spotted by Quartz, it is called “e-screen protector” and is currently at the research stage. It uses the front-facing camera together with face- and gaze-detection algorithms to identify whether anyone else is looking at your display.

    As the demo video shows, it reacts almost instantaneously to a stranger’s glance at your screen, then overlays a Snapchat-like rainbow filter to tag the stranger and signal that they have been caught. The software’s creators, Google researchers Hee Jung Ryu and Florian Schroff, say the system works in a wide range of lighting conditions and has a reaction time of just two milliseconds. A crude approximation of the idea is sketched below.
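
    As a crude approximation of the idea (and emphatically not Google’s e-screen protector), the snippet below counts faces from the default camera with OpenCV’s bundled Haar cascade and prints a warning when more than one face is visible; real gaze detection would also need to work out where each face is looking.

        # Toy "shoulder-surfer" warning: count faces seen by the camera and warn
        # when more than one is visible. Far simpler than true gaze detection.
        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        camera = cv2.VideoCapture(0)                          # default (front) camera

        while True:
            ok, frame = camera.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 1:
                print("Warning: more than one face is looking at the screen")
            cv2.imshow("preview", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):             # press q to quit
                break

        camera.release()
        cv2.destroyAllWindows()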

    Considering Google’s ambitions of making machine learning an integral part of its smartphone ecosystem, this feature might make its way to future Pixel phones when Google is confident about its performance.
