Machine Learning APIs by Example (Google I/O '17)





Find out how you can make use of Google’s machine learning expertise to power your applications. Google Cloud Platform (GCP) offers five APIs that provide access to pre-trained machine learning models with a single API call: Google Cloud Vision API, Cloud Speech API, Cloud Natural Language API, Cloud Translation API, and Cloud Video Intelligence API. Using these APIs, you can focus on adding new features to your app rather than building and training your own custom models. In this session we’ll share an overview of each API and dive into code with some live demos.
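
As a rough illustration of that single-call workflow, here is a minimal sketch of labeling an image with the Cloud Vision API through the Python client library (google-cloud-vision). The file name and the choice of label detection are illustrative assumptions rather than details from the talk; the other APIs follow the same request/response pattern.

    # Minimal sketch: label a local image with the Cloud Vision API.
    # Assumes `pip install google-cloud-vision` and that
    # GOOGLE_APPLICATION_CREDENTIALS points to a service-account key.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    # "photo.jpg" is a hypothetical local file used for illustration.
    with open("photo.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)

    # Each label carries a description and a confidence score between 0 and 1.
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")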

See all the talks from Google I/O ’17 here: https://goo.gl/D0D4VE
Watch more Android talks at I/O ’17 here: https://goo.gl/c0LWYl
Watch more Chrome talks at I/O ’17 here: https://goo.gl/Q1bFGY
Watch more Firebase talks at I/O ’17 here: https://goo.gl/pmO4Dr

Subscribe to the Google Developers channel: http://goo.gl/mQyv5L

#io17 #GoogleIO #GoogleIO2017




Comment List

  • Google Developers
    January 4, 2021

    The presenter is a badass. Great talk.

  • Google Developers
    January 4, 2021

    The video API seems to be a hybrid of some of the previous APIs in this video.

    It could serve as a good research tool for videos, and it could also describe what happens in a video for people who are visually impaired if it were paired with a speech layer that converts the API's labels (31:50, left side) into spoken words for the person with the impairment.

    From the research side, nobody would have to scrub through a thumbnail timeline to get to the good parts of a video.

    The syntax analyzer comes in a distant second, and the visual recognition API would have to evolve by recognizing everything on its own, developing its own attribute parameters instead of relying on ones that were "hard coded" into it.
    As it stands, anomalies would render some of the metadata parameters it learns with useless, since they are defined statically rather than dynamically, which makes the AI a "one-state intelligence".

    This is the gap that should be bridged to make artificial intelligence mimic actual intelligence more effectively, because right now some external source still has to define the unlearned parameters for the AI. As with any form of learning, to describe something you have to compare it, repeatedly and with anomalies in mind, against both what it is and what it is not.

    Giving an AI fully capable deciphering would mean building it from the start with a good-sized binary chart of absolutes and indefinites for classifying all objects, so it could decipher those anomalies independently, with little latency in developing new parameters as it goes: a support vector machine on steroids, if it is made to mimic human thought processes.

    Despite the concern, the Google Plexers deserve a biiiig round of applause for innovation, because they develop a hell of a lot more than I could ever dream up as a full-stack developer. 👍👌

  • Google Developers
    January 4, 2021

    {
      "Demo": "Google ML APIs",
      "Speaker": "Excellent",
      "Contents": "Awesome",
      "Score": 99.9999
    }

  • Google Developers
    January 4, 2021

    Excellent demo. One quick clarification or suggestion: the scores/confidences seem to be on different scales in many places; in one spot they are 0.981234, in another 9.1234. I'm a big fan of Google stuff.

  • Google Developers
    January 4, 2021

    Believe in what you know to be possible in this world.

  • Google Developers
    January 4, 2021

    Nice presentation

  • Google Developers
    January 4, 2021

    Awesome 🙂

  • Google Developers
    January 4, 2021

    How is this confidence level returned? On what basis, and which algorithm gets it right? How can it be improved? Is that possible with the ML API library, or can other techniques be used?

  • Google Developers
    January 4, 2021

    Marissa Mayer destroyed Yahoo.

  • Google Developers
    January 4, 2021

    So you use this tech against us? "Make sure you're not a robot" and click the tiles with a traffic sign? "Click on the tiles with cars"? You're not a robot? I'm not a robot?

  • Google Developers
    January 4, 2021

    That Wikipedia page of yours should be up after this

  • Google Developers
    January 4, 2021

    Was that in London? Been to the one in London last year.

  • Google Developers
    January 4, 2021

    This is amazing..

  • Google Developers
    January 4, 2021

    Awesome

  • Google Developers
    January 4, 2021

    I really enjoyed watching this video; it had a lot of useful content. Thanks, Google.

  • Google Developers
    January 4, 2021

    Amazing Technology

  • Google Developers
    January 4, 2021

    cool stuff

  • Google Developers
    January 4, 2021

    "I built a hell of a speech API using Sox at Google I/O"
    lmao, she made a bootlicking AI

  • Google Developers
    January 4, 2021

    Thanks a lot for the great presentation, Sara. The video API and search within videos are so cool.

  • Google Developers
    January 4, 2021

    Nice presentation

  • Google Developers
    January 4, 2021

    awesome presentation

  • Google Developers
    January 4, 2021

    Thank you

  • Google Developers
    January 4, 2021

    funny how she goes from some random woman to really sexy just because of how she teaches about this topic ^^

  • Google Developers
    January 4, 2021

    Actually, the YouTube auto-generated subtitles did recognize your sentence well! Is it kinda cheating??! hahaha

  • Google Developers
    January 4, 2021

    The face recognition failed to recognize her main feature. She looks like a donkey.

  • Google Developers
    January 4, 2021

    It's only a matter of time before this is real-time and hooked up to augmented reality contact lenses, and we could ask it – Who is that? "That is Matt Damon riding a bike", or "that is a mallard duck with 7 cute little babies". Not to mention, with these new brain sensors and self-learning, it could probably just read our thoughts and answer without us actually talking haha

  • Google Developers
    January 4, 2021

    This is some amazing piece.

  • Google Developers
    January 4, 2021

    Convert your live Voice into Text: https://www.youtube.com/watch?v=jc_-AIYvfKs&t
    Open a URL in your Chrome Browser through your voice: https://www.youtube.com/watch?v=bZQlz2shNg8

  • Google Developers
    January 4, 2021

    Let me touch it.

  • Google Developers
    January 4, 2021

    Cool

  • Google Developers
    January 4, 2021

    Amazing talk. Thank you Google and thank you Sara !!!

  • Google Developers
    January 4, 2021

    The video recognition API is awesome. But uploading video to the server is inefficient on bad bandwidth; is there an offline API that can be used in a local environment, like MapReduce on Hadoop, Spark, etc.?

  • Google Developers
    January 4, 2021

    Awesome demo and pretty informative! However, it would have been a little nicer if she had smiled a bit… seemed like a machine talking about machine learning :p 😉

  • Google Developers
    January 4, 2021

    Presenter, why so serious?

  • Google Developers
    January 4, 2021

    I watched the whole video. I didn't get much, but man, I wish I could understand all of it – I think it'd be soooo great :')

  • Google Developers
    January 4, 2021

    wow
