The Software Society

How digital technology is changing our culture and economy

How can you be part of “the next big thing”?

The Software Society and this blog claim that the “next big thing” driving technology expansion and economic growth will be a tightening connection between people and computers, largely through advances in language understanding technology (speech or text). The expected proliferation of general and specialized “personal assistants” that make us more efficient is one dimension of this trend.

But if this is the next big thing, how does it impact you personally? You can of course take advantage of the trends by using the results, and you will, if my projections are even partially correct.

Can you go a bit deeper and use the trends to improve your current business or create a new one? Or is this technology so specialized that you don’t have a chance without expertise in language technology or programming?

When the Web began to grow, most believed the talent needed to create a Web site was too specialized for web sites to grow exponentially, but they did. The impetus came partly from programming tools that eased the job for experts, partly from tools that allowed creating basic Web sites without programming skills, and partly from the growth of experts in the field who could provide help for hire. Beyond web site development, services such as Google AdWords allowed you to publish ads for your business online, matched to the sites where they were posted, with minimal effort (other than the cost).

These same factors are driving the adoption of language and mobile technology, which will allow an explosion of personal assistants and other intuitive shortcuts to getting results from our tools and devices. Interactive ads (including speech interaction) are one evolution of advertising that will engage customers on mobile devices.

With a growing number of tools and available technical support, creativity and/or expertise in specific areas will be key drivers of growth in this technology revolution. Depending on your position and business, you could consider:

  • Managing or helping to develop a company-specific personal assistant, with either a customer service focus or a marketing focus or both.
  • Preparing voice-interactive mobile or web-based ads for your company or as a service.
  • Developing techniques for presenting your company’s or your web site’s data to personal assistants so that they could answer queries more succinctly.
  • Designing creative content for a voice-interactive application or personal assistant.
  • Creating content for interactive applications or specialized personal assistants, to deliver both entertainment and information.
  • Using text and speech analytics tools to understand “big data” and gain actionable intelligence or to provide concise answers to inquiries from that data.
  • Becoming an expert in the underlying technology to help others do the above.

This is just the beginning. These new technologies will change business communication as social media changed communication before them, the internet before that, and television and radio before that.


8 thoughts on “How can you be part of ‘the next big thing’?”

  • Brian Garr says:

    Indeed, we at LinguaSys have broken the mold in NLU tools. Using our huge ontology covering 15 languages, we don’t need SLMs or embedded grammars. We concentrate on understanding the concepts in an utterance so that we can build a highly structured response, in the form of a SQL statement or an API call, to satisfy the incoming request. Dialog management is automatic, so you don’t have to anticipate what the user may say. I know it sounds strange, but it is a total break with the past of NLU tools. We believe it is the “next big thing”!

  • Andy Peart says:

    Totally agree with this post. At Artificial Solutions – the company I work for – we’ve already started to see elements of the trends outlined here appearing in the commercial world. For example, our customers are extending the use of online virtual assistants beyond purely customer service applications and are now starting to add intelligent speech-enablement in mobile apps as they look for ways to differentiate and add value to the customer experience they deliver.

    But to really give this emerging trend a boost, you need to hand the ability to create sophisticated and intelligent mobile personal assistants to people who are neither programmers nor computational linguists – in the same way that Content Management Systems (CMS) empowered companies to build and maintain their own websites a decade ago.

    And that’s where an intuitive natural language interaction platform such as our own Teneo NLI Platform – made available through an SDK – fits into this emerging story.

  • The next big thing is absolutely custom virtual assistants. Voice Assist provides simple tools to make developing custom assistants less intimidating and simple to deploy across all mobile phones. SpeechScript is a rapid application development environment that is free to developers to add on to our safe driving app or to build entirely new apps like our voice to solution.

  • The best way to achieve a well-functioning Specialized Personal Assistant (SPA) is to employ another kind of SPA, that is, “Sequence Package Analysis,” to understand the emotional state of the mobile user. What better way of making personal assistants really “smart” than by equipping them with emotional intelligence? Here at Linguistic Technology Systems we are focused on designing algorithms that can find the subtleties in natural language dialog so that personal assistants perform almost as well as a sensitive and caring live human assistant – perhaps even better, considering that even the best human assistants have their bad days. For a better grasp of an SPA-driven speech system, please refer to Chapter 5 in Mobile Speech and Advanced Natural Language Solutions, edited by Amy Neustein and Judith Markowitz (Springer 2013).

  • Todd says:

    Agreed, Meisel is on to something here. It has to get easier for consumers and users. I think the notion of a tool might even start to go away as tools begin to appear as simple, easy-to-use apps. For example, at Sensory we use a simple voice-trigger wake-up concept, like the “Hi Galaxy” on the Samsung phones. We call it TrulyHandsfree. Now we are also providing the handset players the ability to embed a simple application that allows the consumer to define their own wake-up trigger phrase…it’s too easy to even think of it as a tool, but it adds personalization for the consumer.

  • Google’s Chrome web browser includes support for Google’s Web Speech API, which developers can use to integrate speech recognition capabilities into their Web apps, using Google’s cloud-based speech recognition at no cost. The speech recognition is essentially general speech-to-text technology, which can be used for dictating a message or terms for a web search, for example, or for voice control of an application by interpreting the resulting text appropriately. The speech recognition can be tested using the Chrome browser.
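    For developers, a minimal sketch of how a page might use this API follows. In Chrome the constructor is exposed with a vendor prefix as `webkitSpeechRecognition`; the feature-detection guard and the handler body here are illustrative, not a complete application.

    ```javascript
    // Minimal sketch of dictation with Chrome's Web Speech API.
    // The API exists only in a supporting browser, so feature-detect first.
    const SpeechRecognition =
      (typeof window !== "undefined" &&
        (window.SpeechRecognition || window.webkitSpeechRecognition)) ||
      null;

    if (SpeechRecognition) {
      const recognizer = new SpeechRecognition();
      recognizer.lang = "en-US";         // language of the expected speech
      recognizer.interimResults = false; // report only final transcripts

      recognizer.onresult = (event) => {
        // Each result holds alternative transcriptions; take the top one.
        const transcript = event.results[0][0].transcript;
        console.log("Recognized:", transcript);
      };

      recognizer.start(); // prompts the user for microphone access
    } else {
      console.log("Web Speech API not available in this environment");
    }
    ```

    The application receives plain text; interpreting that text as a command or query, as the comment above notes, is up to the app.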

  • As a user of speech recognition for the past 20 years, I am quite excited to see some of these projects described. I suspect, however, that most of them will miss accessibility needs and the benefits attending to them would give their design efforts. In the early days of NaturallySpeaking, Dragon Systems rarely spoke with us disabled types because we were the canaries in the coal mine. The product grew better because they listened.

    Listening to the disabled also improves speech products because it forces vendors to truly live universal design. In my own work with programming by speech recognition, a third philosophy developed: accessibility is defined by what the user needs, not what the vendor is willing to give.

    Following that philosophy, you would build customer-focused products that let users customize them according to their workflow or other needs. Systems like the Google speech API are clearly first-generation tools that haven’t quite caught up with a universal design philosophy, or with really understanding how people use speech recognition once they have immersed themselves in it.

    One last thing to consider: the various high-level tools described in these comments could have a significant effect. They don’t just serve the needs of someone juggling driving in heavy traffic, a too-hot cup of coffee, and three memos delivered by text message. They could mean the difference between being employable or marginalized for disabled persons.

  • Dave Rich says:

    Meisel’s post targets a complex issue with spot-on simplicity. Speech applications have been built on “rocket science” for years, and many of the companies involved have sought to keep it that way. They have leveraged their proprietary technologies to build a services revenue stream, control channels, or create a competitive differentiator.

    At LumenVox, we are seeing pent-up demand for simple speech application tools that help de-mystify speech apps and make them less intimidating and costly to deploy. To bring down the cost and complexity of speech deployments, LumenVox today provides grammar and tuning tools and services to enable developers with minimal technical experience to create a great speech user experience. We have much further to go down this path, as do others, to more fully simplify the things that make speech automation difficult. However, in a couple of years, the pieces will be in place to meet the demand and unleash the creativity of non-speech technologists to proliferate new man-machine applications and business models.

    Meisel is right to draw a parallel with what it took to unleash the web. The speech industry is rife with unfulfilled promises from the past. The change in trajectory is just beginning with technical simplification, openly available tools, and the lower TCO of speech automation. This is the foundation for delivering on those unfulfilled promises.
