Deepfake is a term that was first coined in 2017. It refers to a technique that uses machine learning to superimpose existing images and videos onto source images or videos. Now, a fresh report from Imperial College London and Samsung’s AI research lab in the UK has shown how a single still image and an audio clip can be used to produce a singing video of a portrait.
Previously, the technique was used to produce life-like videos from still shots. The researchers rely heavily on machine learning to generate realistic-looking results. A trained eye can easily spot the mechanical, slightly uncanny nature of the AI’s imitation, but it is still impressive given how little data is actually needed. The technique has even been used to animate a portrait of Albert Einstein, the famous physicist, into a unique lecture.
On the more entertaining side, Rasputin can be seen singing Beyoncé’s iconic “Halo”, with comical results. There are also more realistic examples, in which the generated videos mimic human emotions based on the audio fed into the system. Producing deepfakes has become remarkably simpler over time, even though the tools are not commercially available.
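To make the idea concrete, here is a minimal, purely illustrative PyTorch sketch of an audio-driven portrait animator: a single still image is encoded into an identity latent, a short window of audio features is encoded into a motion latent, and a decoder renders one video frame at a time. The class name, layer sizes, and tensor shapes are assumptions for illustration only, not the architecture described in the report.

```python
# Illustrative sketch only: a toy audio-driven portrait animator.
# The real Imperial College / Samsung AI model is far more sophisticated;
# the module names and shapes here are assumptions for illustration.
import torch
import torch.nn as nn

class ToyTalkingHeadGenerator(nn.Module):
    def __init__(self, audio_dim=80, latent_dim=128):
        super().__init__()
        # Encode the single source portrait into an identity latent.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Encode a short window of audio features (e.g. mel-spectrogram frames).
        self.audio_encoder = nn.GRU(audio_dim, latent_dim, batch_first=True)
        # Decode identity + audio latents into one output video frame.
        self.frame_decoder = nn.Sequential(
            nn.Linear(2 * latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, portrait, audio_window):
        identity = self.image_encoder(portrait)              # (B, latent_dim)
        _, audio_state = self.audio_encoder(audio_window)    # (1, B, latent_dim)
        audio_latent = audio_state.squeeze(0)
        return self.frame_decoder(torch.cat([identity, audio_latent], dim=1))

# One still portrait plus 20 frames of 80-bin audio features -> one video frame.
model = ToyTalkingHeadGenerator()
portrait = torch.randn(1, 3, 64, 64)
audio_window = torch.randn(1, 20, 80)
frame = model(portrait, audio_window)
print(frame.shape)  # torch.Size([1, 3, 32, 32])
```

In a real system a frame like this would be produced for every audio window and the frames stitched into a video; here random tensors simply stand in for the portrait and the audio features.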

However, people are reasonably worried about the implications of the technique. It could be used to spread misinformation and propaganda on a large scale, with famous personalities as templates, by those who stand to gain from it. US legislators have already started to take note of the complications that may lie ahead. Deepfakes have already caused harm, especially to women, who have had fake pornography surface as an embarrassing spectacle. For better or worse, it is still too early to say how much good or harm the technology will end up causing.

Previously, similar experiments were conducted to answer the same question, but they used real images. The Harvard investigators overcame this hurdle by using synthetic images that were custom-tailored to each specific neuron’s preference. Where real images are limited to stimuli present in the real world, AI-generated images change the scenario entirely.
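As a rough illustration of how a synthetic image can be tailored to a single neuron’s preference, the sketch below runs plain gradient ascent on the input pixels to maximize one unit’s activation in a small stand-in network. This is a generic activation-maximization toy, not the investigators’ actual pipeline; the network, the unit index, and the step counts are all assumptions.

```python
# Illustrative sketch: tailor a synthetic image to one neuron's "preference"
# via gradient ascent on the pixels. This is a generic activation-maximization
# toy, not the Harvard team's actual pipeline; the network here is random.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "visual system": any differentiable model would do.
model = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)
model.eval()

target_unit = 3                                # the "neuron" we want to excite
image = torch.randn(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation = model(image)[0, target_unit]
    (-activation).backward()                   # gradient ascent on activation
    optimizer.step()

print(f"final activation of unit {target_unit}: "
      f"{model(image)[0, target_unit].item():.3f}")
```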
We are letting AI-enabled machines make decisions on our behalf all the time. They customize our music playlists to our taste because we are too lazy to do so. Applications are available that can scrutinize your resumes and documents, suggesting changes that genuinely improve them. Machine learning algorithms browse through the pictures on your phone, looking for patterns that help them filter out your best images. And all this happens in a matter of seconds. We gape in surprise at such feats of technology. We are impressed because our lives are becoming easier. Few people realize that an algorithm which can shuffle through thousands of pictures to pick the perfect image could do much more if given enough time.
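As a toy stand-in for that photo-filtering idea, the sketch below ranks a batch of images with a crude hand-rolled quality score that combines sharpness and exposure. Real gallery features rely on trained models; this heuristic and every parameter in it are assumptions made purely for illustration.

```python
# Illustrative sketch: rank a batch of photos by a crude "quality" score
# (sharpness via a Laplacian filter plus exposure balance). Real gallery
# features use trained models; this heuristic is only a stand-in.
import torch
import torch.nn.functional as F

def quality_score(images):
    """images: (N, 3, H, W) tensor with values in [0, 1]."""
    gray = images.mean(dim=1, keepdim=True)                    # (N, 1, H, W)
    laplacian = torch.tensor([[0., 1., 0.],
                              [1., -4., 1.],
                              [0., 1., 0.]]).view(1, 1, 3, 3)
    sharpness = F.conv2d(gray, laplacian).var(dim=(1, 2, 3))   # edge energy
    exposure = -(gray.mean(dim=(1, 2, 3)) - 0.5).abs()         # prefer mid-tones
    return sharpness + exposure

photos = torch.rand(8, 3, 64, 64)          # stand-in for a phone gallery
best = quality_score(photos).argsort(descending=True)
print("photos ranked best to worst:", best.tolist())
```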
The question that arises here is simple: where does this stop? Where do we set the limits on the applications of Artificial Intelligence? Should we set them for our own convenience, or with larger interests in view? Elon Musk, CEO and Founder of
Mentioned below are some of the major risks that accompany fully autonomous systems.



Among them, image classification is based on the Inception v3 neural network, with 200 pictures as test data, while object recognition is based on the MobileNet SSD neural network, with a 600-frame video as test data. The final score is calculated from accuracy and speed: the higher both are, the higher the final score.
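Since only the ingredients of the score are stated here, the sketch below shows one hypothetical way such a score could be combined so that higher accuracy and higher speed both raise the result. The weights, the normalization, and the example numbers are assumptions, not the benchmark’s actual formula.

```python
# Hypothetical scoring sketch: the benchmark's exact formula isn't given here,
# so this simply combines accuracy and speed so that higher values of either
# raise the final score. Weights, normalization, and numbers are assumptions.
from dataclasses import dataclass

@dataclass
class TaskResult:
    name: str
    accuracy: float          # fraction of correct predictions, 0..1
    images_per_second: float

def final_score(results, accuracy_weight=0.6, speed_weight=0.4, speed_ref=100.0):
    """Weighted sum over tasks; speed is normalized against a reference rate."""
    score = 0.0
    for r in results:
        score += accuracy_weight * r.accuracy * 100.0
        score += speed_weight * min(r.images_per_second / speed_ref, 1.0) * 100.0
    return score / len(results)

results = [
    TaskResult("image classification (Inception v3, 200 images)", 0.92, 45.0),
    TaskResult("object recognition (MobileNet SSD, 600-frame video)", 0.88, 60.0),
]
print(f"final score: {final_score(results):.1f}")
```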