Assignment No.
Title: Assignment based on Technologies from the Knowledge Navigator video
Problem Definition: List five technologies from the Knowledge Navigator video that were not
around in 1987, but are in widespread use today.
Requirements: Knowledge of human–computer interaction
Learning Objectives: Learn the technologies from the Knowledge Navigator video
Outcomes: After completing this assignment, students will be able to identify and describe
various technologies from the Knowledge Navigator video.
Theory Concepts:
The Knowledge Navigator is a concept described by former Apple Computer CEO John Sculley
in his 1987 book, Odyssey: Pepsi to Apple. It describes a device that can access a large networked
database of hypertext information, and use software agents to assist searching for information.
Videos
Apple produced several concept videos showcasing the idea. All of them featured a tablet-style
computer with numerous advanced capabilities, including an excellent text-to-speech system
with no hint of "computerese", a gesture-based interface resembling the multi-touch interface
later used on the iPhone, and an equally powerful speech-understanding system, allowing the
user to converse with the system via an animated "butler" as the software agent.
In one vignette a university professor returns home and turns on his computer, in the form of a
tablet the size of a large-format book. The agent is a bow-tie-wearing butler who appears on the
screen and informs him that he has several calls waiting. He ignores most of these, which are
from his mother, and instead uses the system to compile data for a talk on deforestation in the
Amazon Rainforest. While he is doing this, the computer informs him that a colleague is calling,
and they then exchange data through their machines while holding a video-based conversation.
In another such video, a young student uses a smaller handheld version of the system to prompt
him while he gives a class presentation on volcanoes, eventually sending a video of an exploding
volcano to the video "blackboard". In a final installment a user scans in a newspaper by placing
it on the screen of the full-sized version, and then has it help him learn to read by listening to
him read the scanned results, and prompting when he pauses.
Credits
The videos were funded and sponsored by Bud Colligan, Director of Apple's higher education
marketing group, written and creatively developed by Hugh Dubberly and Doris Mitsch of Apple
Creative Services, with technical and conceptual input from Mike Liebhold of Apple's Advanced
Technologies Group and advice from Alan Kay, then an Apple Fellow. The videos were produced
by The Kenwood Group in San Francisco and directed by Randy Field. The director of
photography was Bill Zarchy. The post-production mix was done by Gary Clayton at Russian Hill
Recording for The Kenwood Group. The product industrial design was created by Gavin Ivester
and Adam Grosser of Apple design.
Samir Arora, a software engineer at Apple, was involved in R&D on application navigation and
what was then called hypermedia. He wrote an important white paper entitled "Information
Navigation: The Future of Computing". While working for Apple CEO John Sculley at the time,
Arora built the technology to show fluid access to linked data displayed in a friendly manner, an
emerging area of research at Apple. The Knowledge Navigator video premiered in 1987 at
Educom, the leading higher-education conference, in a keynote by John Sculley, with demos of
multimedia, hypertext and interactive learning directed by Bud Colligan. The music featured in
this video is Georg Anton Benda's Harpsichord Concerto in C.
Reception
The astute bow-tie-wearing software agent in the video has been the center of quite a few heated
discussions in the domain of human–computer interaction. It was criticized as being an
unrealistic portrayal of the capabilities of any software agent in the foreseeable future, or even in
a distant future. Some user interface professionals, such as Ben Shneiderman of the
University of Maryland, College Park, have also criticized its use of a human likeness for giving a
misleading idea of the nature of any interaction with a computer, present or future. Some
visions put forth by proponents of the Semantic Web have been likened to that of the Knowledge
Navigator by Marshall and Shipman, who argue that some of these visions "ignore the difficulty
of scaling knowledge-based systems to reason across domains, like Apple's Knowledge
Navigator," and conclude that, as of 2003, "scenarios of the complexity of [a previously quoted]
Knowledge Navigator-like approach to interacting with people and things in the world seem
unlikely."
Siri
The notion of Siri was firmly planted at Apple 25 years earlier, though in the Knowledge
Navigator video the voice assistant was only a concept prototype. In one of the videos, a man
asks the assistant to search for an article published five years before his time; the assistant finds
it and notes that the article is dated 2006, from which we can conclude that the video is set in
September 2011. In October 2011, Apple relaunched Siri, a voice-activated personal assistant
vaguely similar to that aspect of the Knowledge Navigator, just a month after the date implied
by the video.
The Making of Knowledge Navigator
Apple made the Knowledge Navigator video for a keynote speech that John Sculley gave at
Educom (the premier college computer tradeshow and an important event in a large market for
Apple). Bud Colligan, who was then running higher-education marketing at Apple, asked us to
meet with John about the speech. John explained he would show a couple of examples of
student projects using commercially available software simulation packages and a couple of
university research projects Apple was funding. He wanted three steps:
1. what students were doing now,
2. research that would soon move out of labs, and
3. a picture of the future of computing.
He asked us to suggest some ideas. We suggested a couple of approaches, including a short
"science-fiction video." John chose the video. Working with Mike Liebhold (a researcher in
Apple's Advanced Technologies Group) and Bud, we came up with a list of key technologies to
illustrate in the video, e.g., networked collaboration and shared simulations, intelligent agents,
integrated multimedia and hypertext. John then highlighted these technologies in his speech.
We had about six weeks to write, shoot, and edit the video, and a budget of about $60,000 for
production. We began with as much research as we could do in a few days. We talked with
Aaron Marcus and Paul Saffo. Stewart Brand's book on the Media Lab was also a source, as
were earlier visits to the Architecture Machine Group. We also read William Gibson's
"Neuromancer" and Vernor Vinge's "True Names." At Apple, Alan Kay, who was then an Apple
Fellow, provided advice. Most of the technical and conceptual input came from Mike Liebhold.
We collaborated with Gavin Ivester in Apple's Product Design Group, who designed the
"device" and had a wooden model built in little more than a week. Doris Mitsch, who worked in
my group, wrote the script. Randy Field directed the video, and the Kenwood Group handled
production.
The project had three management approval steps:
1. the concept of the science fiction video,
2. the key technology list, and
3. the script.
It moved quickly from script to shooting without a full storyboard, largely because we didn't
have time to make one. The only roughs were a few Polaroid snapshots of the location, two
sketches showing camera position and movement, and a few sketches of the screen. We showed
up on location very early and shot for more than 12 hours. (Completing the shoot within
one day was necessary to stay within budget.) The computer screens were developed over a few
days on a video paint box. (This was before Photoshop.)
The video form suggested the talking agent as a way to advance the “story” and explain what the
professor was doing. Without the talking agent, the professor would be silent and pointing
mysteriously at a screen. We thought people would immediately understand that the piece
was science fiction because the computer agent converses with the professor, something that
only happened in Star Trek or Star Wars.
What is surprising is that the piece took on a life of its own. It spawned half a dozen or more
sequels within Apple, and several other companies made similar pieces. These pieces were
marketing materials. They supported the sale of computers by suggesting that a company
making them has a plan for the future. They were not inventing new interface ideas. (The
production cycles didn’t allow for that.) Instead, they were about visualizing existing ideas—and
pulling many of them together into a reasonably coherent environment and scenario of use. A
short while into the process of making these videos, Alan Kay said, "The main question here is
not is this technology possible but is this the way we want to use technology?" One effect of
the video was engendering a discussion (both inside Apple and outside) about what computers
should be like. On another level, the videos became a sort of management tool. They suggested
that Apple had a vision of the future, and they prompted a popular internal myth that the
company was “inventing the future.”
Technologies
Apple's Siri
Siri is Apple's personal assistant for iOS, macOS, tvOS and watchOS devices that uses voice
recognition and is powered by artificial intelligence (AI). Siri responds to users' spoken
questions by speaking back to them through the device's speaker and presenting relevant
information on the home screen from certain apps, such as Web Search or Calendar. The service
also lets users dictate emails and text messages, reads received emails and messages and
performs a variety of other tasks.
Voice of Siri
ScanSoft, a software company that merged with Nuance Communications in 2005, hired
voiceover artist Susan Bennett that same year when the scheduled artist was absent. Bennett
recorded four hours of her voice each day for a month in a home recording studio, and the
sentences and phrases were linked together to create Siri's voice. Until a friend emailed her in
2011, Bennett wasn't aware that she had become the voice of Siri. Although Apple never
acknowledged that Bennett was the original Siri voice, audio experts at CNN confirmed it. Karen
Jacobsen, a voiceover artist known for her work on GPS systems, provided the original
Australian female voice. Jon Briggs, a former tech journalist, provided Siri's British male voice.
Apple developed a new female voice for iOS 11 with deep learning technology by recording hours
of speech from hundreds of candidates.
Important features and popular commands
Siri can perform a variety of tasks, such as:
   ●   Navigate directions, including "What's the traffic like on my way to work?"
   ●   Schedule events and reminders, such as "Text Ben 'happy birthday' at midnight on
       Tuesday"
   ●   Search the web, including "Find images of dogs"
   ●   Relay information, such as "How long does it take water to boil?"
   ●   Change settings, such as "Increase the screen brightness" or "Take a photo"
Users can make requests to Siri in natural language, as the sketch below illustrates.
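Siri's internals are proprietary, so the following is only a minimal sketch of the generic
listen-interpret-respond loop that voice assistants of this kind implement. It assumes the
third-party Python packages SpeechRecognition and pyttsx3; the interpret() function and its
canned replies are hypothetical stand-ins for a real intent engine.

    # Illustrative sketch only: Siri's actual pipeline is proprietary.
    # Assumes: pip install SpeechRecognition pyttsx3 pyaudio
    import speech_recognition as sr
    import pyttsx3

    def interpret(command: str) -> str:
        """Toy intent matcher: map a transcribed command to a spoken reply."""
        command = command.lower()
        if "weather" in command:
            return "Here is today's forecast."          # a real assistant would call a weather service
        if "brightness" in command:
            return "Increasing the screen brightness."  # ... or a device-settings API
        return "Sorry, I did not understand that."

    recognizer = sr.Recognizer()
    tts = pyttsx3.init()

    with sr.Microphone() as source:                # listen: capture one utterance
        print("Listening...")
        audio = recognizer.listen(source)

    try:
        text = recognizer.recognize_google(audio)  # interpret: cloud speech-to-text
        reply = interpret(text)
    except sr.UnknownValueError:
        reply = "Sorry, I could not hear you."

    tts.say(reply)                                 # respond: speak the answer back
    tts.runAndWait()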
Amazon's Alexa
Amazon Alexa, also known simply as Alexa, is a virtual assistant technology largely based on a
Polish speech synthesizer named Ivona, bought by Amazon in 2013. It was first used in the
Amazon Echo smart speaker and the Echo Dot, Echo Studio and Amazon Tap speakers
developed by Amazon Lab126. It is capable of voice interaction, music playback, making to-do
lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic,
sports, and other real-time information, such as news. Alexa can also control several smart
devices using itself as a home automation system. Users are able to extend the Alexa capabilities
by installing "skills" (additional functionality developed by third-party vendors, in other settings
more commonly called apps) such as weather programs and audio features. It uses automatic
speech recognition, natural language processing, and other forms of weak AI to perform these
tasks. Most devices with Alexa allow users to activate the device using a wake-word (such as
Alexa or Amazon); other devices (such as the Amazon mobile app on iOS or Android and
Amazon Dash Wand) require the user to click a button to activate Alexa's listening mode,
although some phones also allow a user to say a command, such as "Alexa" or "Alexa wake".
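Skills such as those mentioned above are typically implemented as cloud functions built with
the Alexa Skills Kit. The sketch below uses the ASK SDK for Python to show roughly what a
minimal custom skill looks like; the "BoilWaterIntent" name is a hypothetical example, since
real intents are defined in the Alexa developer console.

    # Minimal custom Alexa skill sketch using the ASK SDK for Python
    # (pip install ask-sdk-core). "BoilWaterIntent" is a hypothetical intent name.
    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_request_type, is_intent_name

    class LaunchHandler(AbstractRequestHandler):
        """Runs when the user says "Alexa, open <skill name>"."""
        def can_handle(self, handler_input):
            return is_request_type("LaunchRequest")(handler_input)

        def handle(self, handler_input):
            return handler_input.response_builder.speak(
                "Welcome. Ask me a question.").response

    class BoilWaterHandler(AbstractRequestHandler):
        """Answers one hard-coded question, standing in for a real data source."""
        def can_handle(self, handler_input):
            return is_intent_name("BoilWaterIntent")(handler_input)

        def handle(self, handler_input):
            return handler_input.response_builder.speak(
                "At sea level, water boils at 100 degrees Celsius.").response

    sb = SkillBuilder()
    sb.add_request_handler(LaunchHandler())
    sb.add_request_handler(BoilWaterHandler())
    lambda_handler = sb.lambda_handler()  # entry point when deployed on AWS Lambda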
Functions
Alexa can perform a number of preset functions out-of-the-box, such as setting timers, sharing
the current weather, creating lists, and accessing Wikipedia articles. Users say a
designated "wake word" (the default is simply "Alexa") to alert an Alexa-enabled device of an
ensuing function command. Alexa listens for the command and performs the appropriate
function, or skill, to answer a question or command. When questions are asked, Alexa converts
sound waves into text which allows it to gather information from various sources. Behind the
scenes, the data gathered is then sometimes passed to a variety of sources including
WolframAlpha, IMDb, AccuWeather, Yelp, Wikipedia, and others to generate suitable and
accurate answers. Alexa-supported devices can stream music from the owner's Amazon Music
accounts and have built-in support for Pandora and Spotify accounts. Alexa can play music from
streaming services such as Apple Music and Google Play Music from a phone or tablet. In
addition to performing pre-set functions, Alexa can also perform additional functions through
third-party skills that users can enable. Some of the most popular Alexa skills in 2018 included
"Question of the Day" and "National Geographic Geo Quiz" for trivia; "TuneIn Live" to listen to
live sporting events and news stations; "Big Sky" for hyperlocal weather updates; "Sleep and
Relaxation Sounds" for listening to calming sounds; "Sesame Street" for children's
entertainment; and "Fitbit" for Fitbit users who want to check in on their health stats. In 2019,
Apple, Google, Amazon, and Zigbee Alliance announced a partnership to make their smart home
products work together.
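Amazon's actual routing logic is not public; the sketch below only illustrates the general flow
described above: gate on a wake word, then dispatch the transcribed question to a back-end
source. The lookup functions are hypothetical stubs standing in for services such as
AccuWeather or Wikipedia.

    # Illustrative sketch only: wake-word gating plus crude question routing.
    # The lookup_* functions are hypothetical stubs for real knowledge sources.
    WAKE_WORD = "alexa"

    def lookup_weather(question: str) -> str:
        return "Expect light rain this afternoon."   # stand-in for a weather service

    def lookup_encyclopedia(question: str) -> str:
        return "Here is what I found."               # stand-in for Wikipedia/WolframAlpha

    def route(question: str) -> str:
        """Pick a knowledge source based on simple keyword matching."""
        q = question.lower()
        if "weather" in q:
            return lookup_weather(q)
        if q.startswith(("who is", "what is")):
            return lookup_encyclopedia(q)
        return "Sorry, I don't know that one."

    def handle_utterance(transcript: str):
        """Ignore speech until it begins with the wake word."""
        words = transcript.lower().split()
        if not words or words[0] != WAKE_WORD:
            return None                              # device keeps listening
        return route(" ".join(words[1:]))

    print(handle_utterance("Alexa what is the capital of Peru"))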
Microsoft's Cortana
Cortana is a virtual assistant developed by Microsoft that uses the Bing search engine to perform
tasks such as setting reminders and answering questions for the user. Cortana is currently
available in English, Portuguese, French, German, Italian, Spanish, Chinese, and Japanese
language editions, depending on the software platform and region in which it is used. Microsoft
began reducing the prevalence of Cortana and converting it from an assistant into different
software integrations in 2019. It was split from the Windows 10 search bar in April 2019. In
January 2020, the Cortana mobile app was removed from certain markets, and on March 31,
2021, the Cortana mobile app was shut down globally.
Functionality
Cortana can set reminders, recognize natural voice without the requirement for keyboard input,
and answer questions using information from the Bing search engine (for example, current
weather and traffic conditions, sports scores, and biographies). Searches using Windows 10 are made
only with the Microsoft Bing search engine, and all links will open with Microsoft Edge, except
when a screen reader such as Narrator is being used, where the links will open in Internet
Explorer. Windows Phone 8.1's universal Bing SmartSearch features are incorporated into
Cortana, replacing the previous Bing Search app, which was activated when a user pressed
the "Search" button on their device. Cortana includes a music recognition service. Cortana can
simulate rolling dice and flipping a coin. Cortana's "Concert Watch" monitors Bing searches to
determine the bands or musicians that interest the user. It integrates with the Microsoft Band
smart band for Windows Phone devices; if connected via Bluetooth, it can deliver reminders and
phone notifications.
Since the Lumia Denim mobile phone series, launched in October 2014, active listening was
added to Cortana, enabling it to be invoked with the phrase "Hey Cortana". It can then be
controlled as usual. Some devices from O2 in the United Kingdom received the Lumia Denim
update without the feature, but this was later clarified as a bug, and Microsoft has since fixed it.
Cortana integrates with services such as Foursquare to provide restaurant and local attraction
recommendations and LIFX to control smart light bulbs.
Google's Assistant
Google Assistant is a virtual assistant software application developed by Google that is primarily
available on mobile and home automation devices. Based on artificial intelligence, Google
Assistant can engage in two-way conversations, unlike the company's previous virtual assistant,
Google Now.
Google Assistant debuted in May 2016 as part of Google's messaging app Allo, and its
voice-activated speaker Google Home. After a period of exclusivity on the Pixel and Pixel XL
smartphones, it was deployed on other Android devices starting in February 2017, including
third-party smartphones and Android Wear (now Wear OS), and was released as a standalone
app on the iOS operating system in May 2017. Alongside the announcement of a software
development kit in April 2017, Assistant has been further extended to support a large variety of
devices, including cars and third-party smart home appliances. The functionality of the Assistant
can also be enhanced by third-party developers.
Users primarily interact with the Google Assistant through natural voice, though keyboard input
is also supported. Assistant is able to answer questions, schedule events and alarms, adjust
hardware settings on the user's device, show information from the user's Google account, play
games, and more. Google has also announced that Assistant will be able to identify objects and
gather visual information through the device's camera, and support purchasing products and
sending money. At CES 2018, the first Assistant-powered smart displays (smart speakers with
video screens) were announced, with the first one being released in July 2018. As of 2020, Google
Assistant was available on more than 1 billion devices. Google Assistant is available in more
than 90 countries and in over 30 languages, and is used by more than 500 million users monthly.
Samsung's Bixby
Bixby is a virtual assistant developed by Samsung Electronics. It represents a major reboot of S
Voice, Samsung's voice assistant app introduced in 2012 with the Galaxy S III; S Voice was later
discontinued on 1 June 2020. In May 2017, Samsung announced that Bixby would be coming to
its line of Family Hub 2.0 refrigerators, making it the first non-mobile product to include the
virtual assistant.
Samsung's Bixby digital assistant lets you control your smartphone and select connected
accessories. You can open apps, check the weather, play music, toggle Bluetooth, and much
more.
The most interesting and helpful component is of course Bixby Voice, which lets you use voice
commands to get stuff done. It works with all Samsung apps and a few third-party apps,
including Instagram, Gmail, Facebook, and YouTube. With Voice you can send text messages,
check sports scores, turn down screen brightness, check your calendar, launch apps, and more.
The tech can also read out your latest incoming messages and switch between male and female
voices. Like Google Assistant, Bixby can handle some more complicated two-step commands,
such as creating an album with your vacation photos and sharing it with a friend.
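How Bixby actually decomposes such compound requests is not public. The sketch below is only
a naive illustration of the idea: split one utterance on conjunctions into an ordered list of steps
and execute them in sequence.

    # Illustrative sketch only: naive decomposition of a compound command.
    import re

    def plan(utterance: str) -> list[str]:
        """Split a compound command into sequential steps on common conjunctions."""
        steps = re.split(r"\b(?:and then|then|and)\b", utterance.lower())
        return [s.strip() for s in steps if s.strip()]

    def execute(step: str) -> None:
        # A real assistant would map each step to an app or device action.
        print(f"Executing: {step}")

    for step in plan("Create an album with my vacation photos and share it with a friend"):
        execute(step)
    # Prints:
    #   Executing: create an album with my vacation photos
    #   Executing: share it with a friend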
Conclusion: Successfully studied technologies from the Knowledge Navigator video that were
not around in 1987, but are in widespread use today.