The past three months, it seems, have been a nonstop parade of major tech events and the new product announcements that come with them.
One of the latest was last week’s AWS re:Invent, where Amazon announced a new suite of AI-enabled technology designed with businesses in mind.
And while these new products, features, and tools come with a host of opportunities for the developers, marketers, and others who plan to use them, they also raise a few questions. What are they designed to do? How can they help you? How do they work?
1. What Is AWS re:Invent?
AWS re:Invent is what Amazon describes as “a learning conference” produced by Amazon Web Services (that’s what AWS stands for). The intended audience is what it calls “the global cloud computing community,” but the event features content for anyone who wants to learn how cloud technology can help grow and scale a business. From AdTech to content delivery to the internet of things, the conference offers learning opportunities that range from keynotes to certification sessions.
2. What Were the Major AI Capabilities Announced at AWS re:Invent 2017?
While AWS unveiled a handful of new capabilities, there are four I’d like to focus on (with a bit of teaser text below on what each one can do):
Comprehend: text analysis for a variety of content formats
DeepLens: a wireless, artificially intelligent video camera with newly enhanced object recognition
Transcribe: speech recognition technology to convert the spoken word to text
Translate: what it sounds like — advanced text translation technology
If some of these sound familiar (or seem reminiscent of similar capabilities previously made famous by a certain search engine giant), chances are, it’s because they are. I’ll delve deeper into this below, but many of these capabilities mirror ones for which Google has long been considered a leader, especially in the realm of translation. After all, who could forget this jaunty promotional video on the topic?
3. What is Deep Learning vs. Machine Learning?
I want to establish the difference between these two technologies (and what each one is in the first place) because most of these new capabilities use one or both.
Machine learning essentially describes the ability of a machine to learn things — habits, language, behaviors, and patterns, to name a few — without having been programmed to do so, or pre-loaded with that knowledge. It doesn’t describe artificial intelligence in its entirety; rather, it is one very important AI capability.
Deep learning is a type of machine learning — and is a bit trickier to explain. Basically, it takes the next (big) step in that it’s designed to imitate the way the human brain works by way of something called neural networks. In our brains, we have biological neural networks in which the action of one neuron creates a series of subsequent actions that ultimately result in our different behaviors.
In technology, artificial neural networks seek to replicate that phenomenon by becoming “trained” to comprehend data based on a certain set of criteria. One of the more notable examples of deep learning in practice is in image recognition, in which neural networks are able to recognize different data patterns or cues to learn what, for example, a cat looks like.
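To make the neural-network idea concrete, here is a toy sketch (illustrative only, and far simpler than anything behind these AWS services): a single artificial neuron that weights its inputs, sums them, and squashes the result with an activation function. Deep learning stacks many layers of such units so that later layers can learn higher-level patterns, like the shape of a cat.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs pushed
    through a sigmoid activation, squashing the result into (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two toy "pixel" inputs; the weights decide how strongly each one
# pushes the neuron toward firing.
activation = neuron([0.9, 0.1], weights=[2.0, -1.0], bias=-0.5)
print(round(activation, 3))  # → 0.769
```

"Training" a network means nudging those weights and biases until the outputs match labeled examples; the neuron itself stays this simple.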
4. What Is Amazon Comprehend?
Amazon Comprehend uses something called natural language processing (NLP) to better understand and determine the meaning within text. It’s an instance of machine learning, in which the technology learns how to comprehend — if you will — and process language as it was intended by the human being speaking or writing it.
Comprehend performs this capability with a series of steps:
Identify the language.
Pick out the phrases that most strongly indicate what the text is about — things like names of people, companies, places, or important events.
Use those phrases to determine if the sentiment of the text is positive or negative.
File that set of text according to its topic within a collection of subjects it has already organized based on patterns it’s observed.
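The steps above map closely onto Comprehend's published API operations (language detection, key-phrase and entity extraction, sentiment analysis). The sketch below is illustrative rather than official sample code: the live function requires the boto3 package and AWS credentials, while the offline helper only assumes the documented shape of a sentiment response.

```python
def pick_sentiment(response):
    """Reduce a detect_sentiment-style response to one label plus its
    confidence. The shape (a 'Sentiment' label and a 'SentimentScore'
    dict of confidences) mirrors the documented Comprehend output."""
    return response["Sentiment"], max(response["SentimentScore"].values())

def analyze_comment_live(text):
    """Run the real pipeline; requires boto3 and AWS credentials."""
    import boto3
    client = boto3.client("comprehend")
    lang = client.detect_dominant_language(Text=text)["Languages"][0]["LanguageCode"]
    phrases = client.detect_key_phrases(Text=text, LanguageCode=lang)
    sentiment = client.detect_sentiment(Text=text, LanguageCode=lang)
    return lang, phrases["KeyPhrases"], pick_sentiment(sentiment)

# Offline example using a response shaped like the real API's:
sample = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {"Positive": 0.91, "Negative": 0.02,
                       "Neutral": 0.05, "Mixed": 0.02},
}
label, confidence = pick_sentiment(sample)
```

Run over a batch of customer comments, a loop like this would let a marketer tally sentiment and recurring key phrases by theme.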
So, how does this technology apply to the real world? Well, it’s particularly helpful in an instance of, say, analyzing written customer feedback. By feeding these comments to an API like Comprehend, marketers can use this technology to synthesize data from their audiences to determine something like thematic areas of improvement.
5. What Is DeepLens?
Simply put, DeepLens is a high-definition video camera designed with developers in mind. It was built with deep learning capabilities and what AWS Chief Evangelist Jeff Barr describes as “pre-trained models for image detection and recognition.”
In other words, it’s a very smart camera: one that can recognize objects, faces, motions, and creatures (e.g., a dog from a cat). And while that’s very cool — not to mention, somewhat reminiscent of the recently announced Google Clips camera — there’s a reason why it could prove so helpful to businesses.
To start, DeepLens comes with a number of “templates,” or recognition technologies that users can build upon for their own projects. Object and action recognition, for example, can help to more seamlessly create something like product tutorials or demonstrations, by developing a system or algorithm that learns to recognize how the two are paired for different outcomes.
For example, if you’re demonstrating how a certain cooking appliance can be applied to different scenarios, it seems that DeepLens can be utilized in building a system to recognize the appliance itself (like a standing mixer), the actions the user can take with it (like mixing cake batter), and the resulting outcome (a delicious cake).
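Developers building on DeepLens write code against the output of an image-recognition model. As a loose, hypothetical illustration (the labels, scores, and threshold below are invented for this cooking-demo scenario, not actual DeepLens output), post-processing one frame's detections might look like:

```python
def top_detections(scores, threshold=0.5):
    """Keep only the labels the model is reasonably confident about,
    most confident first. `scores` maps label -> probability."""
    keep = [(label, p) for label, p in scores.items() if p >= threshold]
    return sorted(keep, key=lambda pair: pair[1], reverse=True)

# Invented scores for one frame of the standing-mixer demo above.
frame_scores = {"standing mixer": 0.93, "cake batter": 0.71, "dog": 0.04}
print(top_detections(frame_scores))
# → [('standing mixer', 0.93), ('cake batter', 0.71)]
```

Pairing the surviving labels across frames (mixer, then batter, then cake) is the kind of object-plus-action sequencing the tutorial scenario describes.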
6. What Is Amazon Transcribe?
Anyone with a journalism background is more than familiar with the headache (ahem: joy) of transcribing spoken interviews. We want to get it just right, be sure not to misquote the interviewee, and communicate what was said in the right context.
If only, back in my earliest days of reporting, there were advanced transcription services available to the common writer.
But now, there’s Transcribe: an AWS service that uses machine learning technologies to recognize the spoken word and transcribe it into text.
While the function itself is fairly intuitive, the benefits might not be. So let’s lay out two instances where technology like this can be applied to a marketer’s world:
Interview transcription. I already covered this a few paragraphs earlier, but let’s say you’re writing up a blog post that requires quotes from a spoken interview. This technology eliminates the time-consuming step of transcribing what was said, and leaves you instead with all of the text from that conversation, allowing you to pick and choose the quotes you want to incorporate.
Video transcription. Accessibility is no longer optional. It’s important to provide a way for individuals who are hearing impaired to be able to consume and enjoy your video content, even without the audio. Transcribing what is said in the video by way of full paragraphs or subtitles allows them to do so — and, it allows those who simply prefer to watch videos without sound to be able to absorb the content, as well.
These are only two of the more prominent examples of how such technology could be applied, but there are many more, from transcribing podcasts, to documenting notes from an important meeting.
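Under the hood, Transcribe works asynchronously: you point it at an audio file stored in S3, it runs a transcription job, and it delivers the result as a JSON document. Here is a minimal sketch with the AWS SDK for Python (boto3), assuming the documented API shapes; the live function needs credentials and a real S3 file, while the offline helper just parses a result document.

```python
def start_transcription(job_name, media_uri, media_format="mp3"):
    """Kick off an asynchronous Transcribe job. Requires boto3 and AWS
    credentials, and the audio file must already live in S3."""
    import boto3
    client = boto3.client("transcribe")
    client.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": media_uri},
        MediaFormat=media_format,
        LanguageCode="en-US",
    )

def extract_transcript(result_json):
    """Pull the plain text out of a finished job's result document
    (the shape mirrors Transcribe's documented output JSON)."""
    return " ".join(
        t["transcript"] for t in result_json["results"]["transcripts"]
    )

# Offline example shaped like the real result document:
sample = {"results": {"transcripts": [{"transcript": "Welcome to the show."}]}}
```

From there, the extracted text is ready to be quoted in a blog post or turned into video captions.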
7. What Is Amazon Translate?
This development might be my favorite.
Around here, we talk a great deal about approaching marketing with a global mindset. While I might use JetBlue as a remarkable example of marketing, it might not resonate as much with audiences in countries where this airline doesn’t operate.
To put it simply, the internet is a global, international destination. The people reading your content might not regularly engage with the same brands you do, and they might not speak the same language.
That’s why a growing number of developers and marketers are building a multilingual web presence — one where their online properties and content can be seamlessly viewed in the language preferred by the user. It’s a trend that, as HubSpot’s own global presence continues to grow, I take inordinate glee in seeing.
It’s also why I love seeing tools become available that make it easier to approach marketing with a “global first” mindset. Translate is one such tool: a service that uses machine learning to more naturally translate text from one language to another.
Here’s a look at how it worked when translating a French paragraph to English:
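Programmatically, that French-to-English call can be sketched with the AWS SDK for Python (boto3). The live function below follows the published Translate API but requires credentials; the chunking helper is purely hypothetical (the 200-character cap is invented for illustration), reflecting the general need to batch long text for translation services.

```python
def translate_fr_to_en(text):
    """Live call: requires the boto3 package and AWS credentials.
    The call shape follows the published Translate API."""
    import boto3
    client = boto3.client("translate")
    response = client.translate_text(
        Text=text,
        SourceLanguageCode="fr",
        TargetLanguageCode="en",
    )
    return response["TranslatedText"]

def chunk_for_translation(sentences, max_chars=200):
    """Group sentences into batches under max_chars, since translation
    APIs cap request size (this particular cap is invented here)."""
    batches, current = [], ""
    for sentence in sentences:
        candidate = (current + " " + sentence).strip()
        if current and len(candidate) > max_chars:
            batches.append(current)
            current = sentence
        else:
            current = candidate
    if current:
        batches.append(current)
    return batches
```

Feed each batch through the translate call and join the results, and a page's worth of French copy comes back as English.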
8. Is Amazon Creeping Into Google’s Territory?
So, here’s the million-dollar question.
It’s not the definitive answer I hoped to have, but in these early release days, it might be too soon to tell. While most of Google’s headline-making AI developments are largely consumer-centric (like the previous example of Clips), it is true that the company has been working on its own stack of machine learning capabilities for businesses. Look no further than Google.ai, for example, where the mission is to bring “the benefits of AI to everyone” — including, I assume, marketers.
This is only the beginning.
What are you most excited about? What confuses or scares you, and what fills you with delight? Feel free to weigh in with your thoughts on these AWS AI developments on Twitter, or let me know if you have questions about them.