
Strengthening deep neural networks : making AI less susceptible to adversarial trickery

By: Warr, Katy
Publication details: Mumbai: Shroff Publisher, 2019.
Description: xiii, 227 p.; pb; 23.00
ISBN:
  • 9789352138739
Subject(s):
DDC classification:
  • 006.32 WAR
Summary: As Deep Neural Networks (DNNs) become increasingly common in real-world applications, the potential to "fool" them presents a new attack vector. In this book, author Katy Warr examines the security implications of DNNs interpreting audio and images very differently from humans. You'll learn about the motivations attackers have for exploiting flaws in DNN algorithms and how to assess the threat to systems that incorporate neural network technology. Through practical code examples, this book shows how DNNs can be fooled and demonstrates ways they can be hardened against trickery.
  • Learn the basic principles of how DNNs "think" and why this differs from our human understanding of the world
  • Understand adversarial motivations for fooling DNNs and the threat posed to real-world systems
  • Explore approaches for making software systems that incorporate DNNs less susceptible to trickery
  • Peer into the future of artificial neural networks to learn how these algorithms may evolve to become more robust
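The "fooling" the summary refers to is typically done with adversarial perturbations. As a minimal sketch (not taken from the book's own examples), the Fast Gradient Sign Method nudges an input in the direction that most changes the model's score; the single linear "model" below is an assumed stand-in for a real network, chosen so the gradient is exact and the effect is easy to verify.

```python
import numpy as np

# Minimal FGSM-style sketch. The "network" here is a single linear scorer,
# an assumption made for clarity; a real DNN would supply the gradient via
# backpropagation instead.
rng = np.random.default_rng(0)
w = rng.normal(size=16)      # fixed model weights
x = rng.normal(size=16)      # a clean input

def score(v):
    # Positive score => the model leans toward class 1.
    return float(w @ v)

# For a linear model, the gradient of the score w.r.t. the input is just w.
eps = 0.1
x_adv = x - eps * np.sign(w)  # small per-feature nudge against class 1

# The tiny perturbation reliably lowers the class-1 score.
assert score(x_adv) < score(x)
```

Each feature moves by at most `eps`, so the adversarial input stays close to the original, which is exactly what makes such attacks hard to spot.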
Holdings
Item type: Books
Current library: IIT Gandhinagar General Stacks
Collection: General
Call number: 006.32 WAR
Copy number: 1
Status: Available
Barcode: 029340




Copyright ©  2022 IIT Gandhinagar Library. All Rights Reserved.

Powered by Koha