How IBM Watson works

Watson was a project developed by IBM from 2004 to 2011 to beat the best humans at the television game show Jeopardy! The project represented the peak of Probabilistic Reasoning and was one of the last successful systems to rely on it before Deep Learning became the go-to solution for most Machine Learning problems.

Since Deep Blue’s victory over Garry Kasparov in 1997, IBM had been searching for a new challenge. In 2004, Charles Lickel, an IBM Research Manager at the time, identified the challenge during a dinner with co-workers. Lickel noticed that most people in the restaurant were staring at the television in the bar: Jeopardy! was airing. As it turned out, Ken Jennings was playing his 74th match, the last game he would win.

The computer IBM used to run Watson in the Jeopardy! competition

In the Deep Blue project, the rules of Chess were entirely logical and could easily be reduced to math. The rules of Jeopardy!, however, involved complex behaviors such as language and were much harder to handle. When the project started, the best question-answering (QA) systems could only answer very simple language questions like, “What is the capital of Brazil?” Jeopardy! is a quiz competition in which contestants are presented with a clue in the form of an answer, and they must phrase their response as a question. For example, a clue could be: “Terms used in this craft include batting, binding, and block of the month.” The correct response would be “What is quilting?”

IBM had already been working on a QA system called Practical Intelligent Question Answering Technology (Piquant) for six years before David Ferrucci started Watson. In a U.S. Government competition, Piquant correctly answered only 35% of the questions and took minutes to do so. This was not even close to what was necessary to win Jeopardy!, and attempts to adapt Piquant failed. A new approach to QA was required, and Watson was the next attempt.

In 2006, Ferrucci ran initial tests of Watson and compared the results against the existing competition. Watson was far below what was necessary to compete live: it responded correctly only about 15% of the time, while the best human Jeopardy! players answer with roughly 95% precision, and it was also too slow. Watson had to be much better than the best software system at the time to have even the slightest chance of winning against the best humans. The next year, IBM staffed a team of 15 and set a timeframe of three to five years. Ferrucci and his team had much work to do, and they succeeded: in 2010, Watson was winning matches against Jeopardy! contestants.

Comparison of precision and percentage of questions answered by the best system before IBM Watson and the top human Jeopardy! players

What made the game so hard for Watson was that language was still very difficult for computers at the time. Language is full of implied meaning. An example is the clue “The name of this hat is elementary, my dear contestant.” People easily detect the wordplay echoing “elementary, my dear Watson,” the catchphrase associated with Sherlock Holmes, and then recall that the Hollywood version of Sherlock Holmes wears a deerstalker hat. Programming a computer to make this kind of inference for all types of questions is hard.

To provide a physical presence in the televised games, Watson was represented by a “glowing blue globe crisscrossed by threads of ‘thought,’ — 42 threads, to be precise,” referencing the significance of the number 42 in the book The Hitchhiker’s Guide to the Galaxy. Watson represented the peak and the end of Probabilistic Reasoning. Let’s go over how Watson worked.

Watson’s main difference from other systems was its speed and memory. Watson had millions of documents stored in its memory, including books, dictionaries, encyclopedias, and news articles, collected either online from sources like Wikipedia or offline. Its algorithm combined different techniques that together made Watson win the competition. The following are a few of these techniques.

Learning from Reading

First, Watson “read” vast amounts of text. It analyzed the text semantically and syntactically, meaning that it tried to tear sentences apart to understand them. For example, it identified the subject, verb, and object of each sentence and produced a graph of the sentence, called a Syntactic Frame. Again, the A.I. used learning techniques much like humans do, in this case the way an elementary school student learns the basics of grammar.
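To make this concrete, here is a toy sketch in Python of what extracting a Syntactic Frame might look like. It is not IBM’s code: the naive word-order rule only handles simple “subject verb object” sentences, whereas Watson relied on full grammatical parsing.

    # Toy illustration (not IBM's implementation) of a Syntactic Frame:
    # the subject, verb, and object extracted from a sentence.
    from dataclasses import dataclass

    @dataclass
    class SyntacticFrame:
        subject: str
        verb: str
        obj: str

    def parse_simple_sentence(sentence: str) -> SyntacticFrame:
        # Naive rule: first word is the subject, second the verb, the rest
        # the object. Real parsers handle arbitrary grammar.
        words = sentence.rstrip(".").split()
        return SyntacticFrame(subject=words[0], verb=words[1], obj=" ".join(words[2:]))

    print(parse_simple_sentence("Inventors invent patents."))
    # SyntacticFrame(subject='Inventors', verb='invent', obj='patents')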

Then, Watson correlated sentences and calculated a confidence score for each one based on how many times, and in which sources, the information was found. For example, in the sentence “Inventors invent patents,” Watson identified “Inventors” as the subject, “invent” as the verb, and “patents” as the object. The entire sentence received a confidence score of 0.8 because Watson found it in a few of the relevant sources. Another example is the sentence “People earn degrees at schools,” which received a confidence score of 0.9. A Semantic Frame is a sentence plus its score, together with the syntactic role of each word.

How learning from reading works

Figure 8.6 shows the process of learning from reading. First, the text is parsed and turned into Syntactic Frames. Then, through generalization and statistical aggregation, the Syntactic Frames are turned into Semantic Frames.
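A minimal sketch of that aggregation step, assuming each parsed sentence arrives with a rough source-reliability weight, might look like the following. The scoring formula is made up for illustration; the text only says that frames found in more, and better, sources get higher confidence.

    # Hypothetical sketch of building Semantic Frames: accumulate evidence for
    # each normalized sentence across sources and squash it into a 0..1 score.
    from collections import Counter

    def build_semantic_frames(parsed_sentences):
        # parsed_sentences: list of (normalized sentence, source reliability) pairs
        evidence = Counter()
        for sentence, reliability in parsed_sentences:
            evidence[sentence] += reliability
        # More (and more reliable) sources -> evidence grows -> score approaches 1.
        return {s: round(e / (e + 1.0), 2) for s, e in evidence.items()}

    corpus = [
        ("inventors invent patents", 0.8),        # e.g. found in an encyclopedia
        ("inventors invent patents", 0.5),        # e.g. found in a news article
        ("people earn degrees at schools", 0.9),
    ]
    print(build_semantic_frames(corpus))
    # {'inventors invent patents': 0.57, 'people earn degrees at schools': 0.47}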

Most of the algorithms in Watson were not novel techniques. For example, for the clue “He was presidentially pardoned on September 8, 1974,” the algorithm determined that the clue was asking for the subject of the sentence. It then searched for possible subjects in Semantic Frames containing similar words. Based on the syntactic breakdown done in the first step, it generated a set of possible answers. If one of the possible answers it came up with was “Nixon,” that would be considered a candidate answer. Next, Watson played a clever trick: it replaced the word “He” with “Nixon,” forming the new sentence “Nixon was presidentially pardoned on September 8, 1974.”

Then, it ran a new search with this generated sentence, checking whether it was supported by the knowledge base. The search found the Semantic Frame “Ford pardoned Nixon on September 8, 1974,” which itself had a high score. Because the two sentences were very similar and the matching frame was highly scored, “Nixon” received a high confidence score as an answer. But searching and computing a confidence score was not the only technique Watson applied.
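The sketch below roughly illustrates the substitute-and-verify step just described. The plain word-overlap similarity is an assumption to keep the example short; Watson combined far richer evidence when scoring a candidate.

    # Hypothetical sketch: replace the clue's pronoun with a candidate answer and
    # score the rewritten sentence against the stored Semantic Frames.
    def word_set(sentence):
        return set(sentence.lower().rstrip(".").replace(",", "").split())

    def verify(clue, pronoun, candidate, semantic_frames):
        hypothesis = clue.replace(pronoun, candidate)
        best = 0.0
        for frame_text, frame_score in semantic_frames.items():
            a, b = word_set(hypothesis), word_set(frame_text)
            overlap = len(a & b) / len(a | b)        # word-overlap (Jaccard) similarity
            best = max(best, overlap * frame_score)  # weight by the frame's own confidence
        return best

    frames = {"Ford pardoned Nixon on September 8, 1974": 0.9}
    clue = "He was presidentially pardoned on September 8, 1974"
    print(round(verify(clue, "He", "Nixon", frames), 2))  # 0.6 -> strong support for "Nixon"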

Evaluating Hypotheses

Evaluating hypotheses was another clever technique that Watson employed to help assess its answers. For the clue “In cell division, mitosis splits the nucleus and cytokinesis splits this liquid cushioning the nucleus,” Watson searched for possible answers in the knowledge base it had acquired through reading. In this case, it found many candidate answers.

Systematically, it tested the possible answers by creating an Intermediate Hypothesis for each one, checking whether the candidate fit the criterion of being a liquid. It calculated the confidence of each candidate being a liquid using its Semantic Frames and the same search mechanism described above, producing a percentage for each one.

To generate these confidence scores, it searched through its knowledge base and, for example, found the Semantic Frame:

Cytoplasm is a fluid surrounding the nucleus.

It then checked whether a fluid is a type of liquid. To answer that, it looked at different resources, one of them being WordNet, but did not find evidence that a fluid is a liquid. Through its knowledge base, however, it had learned that people sometimes consider a fluid to be a liquid, so it did not eliminate Cytoplasm from the possible answer set. With all that information, it produced an answer set in which each answer had its own probability assigned to it, and this step helped Watson update the confidence score of each answer.
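The sketch below gives a simplified picture of this hypothesis-testing step. The tiny knowledge base, the liquid-like word list, and the scoring rule are all illustrative assumptions, not Watson’s real data or logic.

    # Hypothetical sketch of an Intermediate Hypothesis check: "is this candidate
    # a liquid?" scored against the Semantic Frames the system has read.
    semantic_frames = {
        "Cytoplasm is a fluid surrounding the nucleus": 0.9,
        "Mitochondria are organelles inside the cell": 0.8,
    }

    # Assumed lexical knowledge: words the system has learned people treat as liquid-like.
    LIQUID_LIKE = {"liquid", "fluid", "plasma"}

    def is_liquid_confidence(candidate):
        confidence = 0.0
        for frame, score in semantic_frames.items():
            words = set(frame.lower().split())
            if candidate.lower() in words and LIQUID_LIKE & words:
                # Evidence that this candidate is described as a liquid-like substance.
                confidence = max(confidence, score)
        return confidence

    for candidate in ["Cytoplasm", "Mitochondria"]:
        print(candidate, is_liquid_confidence(candidate))
    # Cytoplasm 0.9      -> kept with high confidence
    # Mitochondria 0.0   -> confidence lowered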

Cross-Checking Space and Time

Another technique Watson employed was to cross-check candidate answers against time or space, checking which answers could be eliminated or adjusting the probability of a response being correct.

For example, take the clue “In 1594, he took the job as a tax collector in Andalusia.” The two top answers generated by the first pass of the algorithm were “Thoreau” and “Cervantes.” When Watson analyzed “Thoreau” as a possible answer, it discovered that Thoreau was born in 1817, and at that point Watson ruled the answer out because he was not alive in 1594.
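A small sketch of this temporal cross-check is shown below. The birth and death years are real, but the flat dictionary lookup is an assumption; Watson extracted such facts from its document collection.

    # Illustrative sketch: rule out candidates whose lifespan does not include
    # the year mentioned in the clue.
    LIFESPANS = {
        "Thoreau": (1817, 1862),
        "Cervantes": (1547, 1616),
    }

    def alive_in_year(candidate, year):
        born, died = LIFESPANS[candidate]
        return born <= year <= died

    clue_year = 1594
    for candidate in ["Thoreau", "Cervantes"]:
        print(candidate, alive_in_year(candidate, clue_year))
    # Thoreau False   -> eliminated
    # Cervantes True  -> kept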

Learning through Experience

Jeopardy!’s clues are grouped into categories, which limits the scope of knowledge needed. Watson used that information and adjusted its answer confidence. For example, in the category “Celebrations of the Month,” the first clue was “National Philanthropy Day and All Souls’ Day.” Based on its algorithm, Watson’s answer would have been “Day of the Dead,” because it had classified this category as type “Day,” but the correct response was “November.” Because of that, Watson updated the category type to a mix of “Day” and “Month,” which boosted answers of type “Month.” Over time, Watson could update the expected type of response for a given category.

IBM Watson updates the category type when its responses do not match the type of the correct answer, revising the likely category type based on the correct answers seen so far.
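A hypothetical sketch of this update follows: keep a running count of the answer types observed for each category and boost candidates whose type matches the learned mix. The specific counting rule is an assumption for illustration.

    # Hypothetical sketch of learning through experience: track which answer
    # types turn out to be correct for a category and boost matching candidates.
    from collections import defaultdict

    category_types = defaultdict(lambda: defaultdict(int))

    def observe_correct_answer(category, answer_type):
        # Record the type of the correct response for this category.
        category_types[category][answer_type] += 1

    def type_boost(category, answer_type):
        counts = category_types[category]
        total = sum(counts.values())
        return counts[answer_type] / total if total else 0.0

    category = "Celebrations of the Month"
    observe_correct_answer(category, "Day")    # initial belief: clues ask for a "Day"
    observe_correct_answer(category, "Month")  # correct response "November" was a "Month"
    print(type_boost(category, "Month"))       # 0.5 -> the category is now a mix of Day and Month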

First Match

This image shows the evolution of IBM Watson’s performance across its different versions

The first broadcast match took place a month later, on February 14, 2011, and the second match the next day. Watson won the first match, but it made a huge mistake. In the final round of the first match, Watson’s response in the U.S. Cities category was “What is Toronto??????” Alex Trebek, the host of Jeopardy! and a Canadian native, made fun of Watson, joking that he had learned that Toronto was an American city.

David Ferrucci, the lead scientist, explained that Watson did not work with structured databases, so it used “U.S. Cities” only as a hint about what the answer could include, and that several American cities are in fact named Toronto. Also, the Canadian baseball team, the Toronto Blue Jays, plays in the American League. That could be why Watson considered Toronto one of the possible answers. Ferrucci also noted that answers in Jeopardy! very often are not the kind of thing literally named in the category; Watson knew that, so it treated “U.S. Cities” as only a clue and used other elements to build its response. The engineers also pointed out that Watson’s confidence was very low, as indicated by the string of question marks after its answer: Watson had only 14% confidence in “What is Toronto??????”, and the correct answer, “What is Chicago?”, was a close second with 11% confidence. At the end of the first match, however, Watson had more than triple the money of the second-best competitor: Watson finished with $35,734, Brad Rutter with $10,400, and Ken Jennings with $4,800.

David Ferrucci, the man behind Watson (left), and Watson playing Jeopardy! (right)

Second Match

On the second day of the competition, one of the engineers wore a Toronto Blue Jays jacket to support Watson. The game started, and Jennings chose the Daily Double clue. Watson responded incorrectly to a Daily Double clue for the first time, and after the first round it placed second for the first time in the competition. But in the end, Watson won the second match with $77,147, with Jennings in second place at $24,000. IBM Watson was the first machine to win Jeopardy! against the best humans, and it made history.
