1. Introduction

Artificial intelligence (AI) is arguably as old as (if not older than) computer science itself. Indeed, the question of intelligent machines was one of Alan Turing's motivations (1950) for stating the principles of the Turing Machine, which remains to this day the abstract model of computers. Building intelligent systems out of computers has hence been a continuous challenge for many computer scientists and developers. Among different paths to that goal, one that has been largely studied involves the explicit representation of knowledge, and the processing of those representations by generic reasoning engines[1] (Schreiber et al. 2000).

The advent of the Web, and then of mobile computing, has however dramatically changed the way we use computers, and with it our expectations of what such intelligent systems should be. Of course, it has also changed the means available to build them. One could argue that the Web changes the fundamental assumptions on which AI was traditionally built, raising a number of new challenges, but also providing new opportunities. The goal of this dissertation is to show how, in my work over the last ten years, I have aimed at novel approaches to knowledge engineering, intending to tackle those challenges and leverage those opportunities.

1.1. Dynamics and ambivalence

Since the first attempts to build knowledge-based computer systems, the process of acquiring and formalizing knowledge has been recognized as a major bottleneck in the building of such systems. With expert systems, knowledge was first acquired by knowledge engineers through interviews with domain experts. Such interviews are very time-consuming, as the parties have different skills (respectively in formal models and in the application domain) and must learn enough from each other to reach an agreement on how to represent the experts’ knowledge. Furthermore, a large part of the expert’s knowledge is tacit (Nonaka and Takeuchi 1995), and eliciting it can be challenging. Finally, experts can be reluctant to disclose their knowledge if they have the feeling that the system is meant to replace them.

In order to tackle those difficulties, alternative approaches have been proposed, such as applying natural language processing (NLP) to a corpus of texts related to the application domain (Delannoy et al. 1993; Mooney and Bunescu 2005). The goal is to automatically or semi-automatically discover the domain terminology, and extract relevant knowledge in the desired formalism (rules, description logics, etc.). While less time-consuming than interviewing experts, such approaches produce knowledge representations that, most of the time, still require human validation. This is related to the fact that traditional AI is strongly rooted in formal logic, which leaves no room for approximate or relative truths[2]. Hence, any piece of knowledge that could be collected had to be scrutinized for validity and consistency. This makes costly not only the building of the knowledge base, but also any evolution that this knowledge base might undergo. Knowledge-based AI has therefore mostly developed on the premise that knowledge was rare, and as such should be made as stable as possible, despite a few efforts to temper this tendency by applying agile methodologies (Auer and Herre 2007; Canals et al. 2013).

Another popular alternative approach is case-based reasoning (CBR), founded on a memory model proposed by Schank (1982) and formalized by Aamodt and Plaza (1994). Schank points out that many reasoning tasks are not performed from first principles, but instead by reproducing or adapting the solution of a past similar problem. In CBR, problem-solving knowledge is then captured by a set of cases, which are examples of previous problems with their solutions. Reasoning is achieved by comparing the problem at hand with the ones previously solved, retrieved from the case base, and adapting the solution of the closest case. If successful, that adapted solution is in turn recorded as a new case[3]. One benefit of CBR over other approaches is that it does not require domain knowledge to be fully formalized. Instead, a representative set of prototypical cases is enough to get it started. Those may be significantly easier to acquire than more general knowledge, as they are not expected to hold a general or universal truth, but only a local solution in the context of the given problem. Another benefit is that a CBR system learns new cases as it is being used, which makes it able to improve over time.
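The retrieve–adapt–retain cycle described above can be illustrated with a minimal sketch. This is not an actual CBR system: the numeric problem representation, the similarity measure, and the adaptation rule are all toy assumptions chosen for brevity.

```python
# Minimal sketch of the CBR cycle: retrieve the most similar past case,
# adapt its solution to the new problem, and retain the result as a new case.
# Cases are (problem, solution) pairs over numbers; real systems use richer
# structures (see the "knowledge containers" discussed below).

def similarity(p1, p2):
    # Toy similarity measure: the closer two problems, the higher the score.
    return -abs(p1 - p2)

def solve(case_base, problem):
    # Retrieve: find the past case whose problem is closest to the new one.
    nearest_problem, nearest_solution = max(
        case_base, key=lambda case: similarity(case[0], problem))
    # Adapt (toy rule): shift the old solution by the problem difference.
    solution = nearest_solution + (problem - nearest_problem)
    # Retain: record the adapted solution as a new case for future reuse.
    case_base.append((problem, solution))
    return solution

# Usage: a tiny case base; solving a new problem reuses the nearest case
# and grows the case base as a side effect.
cases = [(1, 2), (5, 10)]
print(solve(cases, 4))  # adapts the (5, 10) case
```

Note how the case base improves with use, but also how its unbounded growth already hints at the maintenance issues discussed next.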

Still, those benefits must not hide inherent difficulties. While the case base may be relatively easy to gather, it is only one of the knowledge containers required by the CBR system (Richter 2003; Cordier 2008). Other knowledge containers include: the structural representation of cases, the similarity measure used to compare cases to the problem at hand, and the adaptation knowledge used to adapt its solution. Furthermore, an ever-growing case base requires continuous maintenance (Lopez De Mantaras et al. 2005, section 5; Cummins and Bridge 2011) in order to prevent pollution (from bad quality cases) or bloating (from an excessive quantity of cases). Maintenance becomes even more challenging when changes occur in the context in which the CBR system operates, and old cases become obsolete. Replacing obsolete cases by new ones is often not enough to take changes into account: all the knowledge containers mentioned above may have to evolve as well. The very structure of the cases (a problem and its solution) may have to be revised as the nature of the problem, or its understanding, may change over time. This of course may have a huge impact on the whole system, as all knowledge containers are strongly dependent on that structure.

A large part of our work has been inspired by CBR, and by efforts to alleviate the problems faced by any knowledge-based system when its context changes. Indeed, is not adaptability a core aspect of intelligence? But evolving implies the continuous acquisition of knowledge. In other words, adaptive reasoning mechanisms must take into account, from the ground up, the dynamics of their knowledge base. This does not just mean being able to integrate new knowledge as it is being acquired; nor is it limited to being able to revoke old knowledge obsoleted by new information. It requires embracing the fact that information is inherently ambivalent, that it acquires meaning (and hence becomes knowledge) only in the context of a particular problem or task[4]. Note that ambivalence must not be confused with ambiguity; effective ambivalence means that enough contextual information is available to disambiguate knowledge, i.e. to decide on a particular interpretation.

This is where the Web offers an unprecedented opportunity, as it has become the hub of most of our digital activities, and indeed of a large part of our personal and social lives. Its network structure naturally relates our actions to a vast amount of information, be it text material (Wikipedia[5], various blogs or forums), structured databases (such as Freebase[6] or MusicBrainz[7]), or interactive services (weather forecast, route planning, etc.). Of course, this is not new, and major Web companies have a long (and controversial[8]) history of studying and using this wealth of information, to gain deeper knowledge about their users and provide more targeted services. Their approach is however mostly one-way: users have little (if any) insight into, or control over, the information that services have about them, or how it is used. And even if they did, that information is buried in machine-generated statistical models that produce results mostly in a black-box fashion.

In contrast to this service-centered approach, we have been pursuing a user-centered approach, much in the line of Hsieh et al. (2013), where data collection and reasoning processes are as transparent as possible. It is important that every result can be explained and traced back to its premises, and that the interpretation choices about ambivalent information can be elicited. Indeed, the ultimate judge of the relevance of a reasoning process is the human on behalf of whom the system is working. It is therefore important that users have all the means to understand the results of the system, that they can comment on them, and that their feedback be collected as additional knowledge for future reasoning tasks. That way, meaning is not a pre-defined property of information, but negotiated and co-constructed with users. More generally, user feedback does not have to be explicit or direct: any interaction of the users with the system can be considered as a clue and participate in this negotiation.

1.2. Structure of the dissertation

The rest of this dissertation is structured as follows.

In Chapter 2, I will first present our work on building knowledge-based systems exploiting a special kind of knowledge, namely activity traces. More precisely, those systems keep track of how users have interacted with them in the past, in order to gather experience and, from this, to learn both about the users and from them. By capturing the inherent complexity of the user’s task, this kind of knowledge allows for multiple interpretations, and hence requires a special kind of reasoning as well. I will describe the theoretical framework that we have proposed to build such trace-based systems, as well as the generic implementation of that framework which we have used in order to validate our proposals in various contexts.

One particular domain where experiential knowledge can prove useful is undoubtedly user assistance. In Chapter 3, I will present a number of our works focusing on that topic. The most straightforward use of traces to help users is simply to present them with their traces, in order to help them remember their activity or explain it to others. Of course, one’s activity is not always a flawless and straight path, and it may be useful to detect failures and errors in the collected traces in order to make them more useful. Finally, observing the user’s interactions with the system, and detecting unsuccessful or abnormal patterns, can help detect problems and make helpful proposals.

Chapter 4 will then describe our activity related to Web technologies and Web standards. The Web was initially designed as a document space, and documents are the traditional means for humans to represent their knowledge. Thus, digital documents, such as those used on the Web, can be designed in such a way that the knowledge they represent is equally usable by humans and machines. This is the starting idea that led to the concept of the Semantic Web (Berners-Lee et al. 2001; Shadbolt, Hall, and Berners-Lee 2006). I will show how the REST architectural style, which is one of the foundations of the Web, accommodates and even encourages ambivalent information. As such, it makes it possible to bridge the gap between documents, data and knowledge representations.

In Chapter 5, I will focus on a specific class of documents, namely hypervideos. Indeed, while hypertext can build on centuries of practice around textual documents[9], video as a document form is hardly older than hypervideo itself, and lacks well-established usages when it comes to active reading or annotation. Documentary structures for hypervideos must therefore be flexible enough to allow the emergence of new usages. Here again, ambivalence is a key to this flexibility. We have proposed models and tools to represent and process hypervideo, centered on the concept of annotation. Interestingly, video annotations share many commonalities with the activity traces presented in Chapter 2: both relate to something that is hard for computers to grasp (respectively the multimedia signal and the user’s activity), and that has an inherent temporal dimension to which annotations and traces alike are anchored. Furthermore, video annotations are used, among other scenarios, to manually build traces of a recorded activity.

Finally, in the last chapter, to synthesize all the presented works, I will propose the groundwork of a theoretical framework for knowledge representation, aimed at coping with and accounting for multiple interpretations. In other words, it is an attempt to formalize ambivalent information and the dynamic reasoning processes that use it.

Notes

[1] In contrast, machine learning approaches aim at producing models from instance data. Those models can be used to make predictions or decisions, and they arguably capture some knowledge about the domain; but they are usually very hard for humans to interpret, and hence do not qualify as explicit knowledge representations.
[2] Alternative formalisms, such as modal logics (Chellas 1980) or fuzzy logics (Zadeh 1965), have been proposed, but with no real breakthrough in knowledge engineering.
[3] Actually, even a failed adaptation can be recorded in the case base, in order to prevent the system from making the same mistake again.
[4] CBR does not fully meet this requirement, as all the knowledge containers, especially the structure of the cases themselves, are usually designed for a predefined class of problems.
[5] http://www.wikipedia.org/
[6] http://freebase.com/
[7] http://musicbrainz.org/
[8] See for example the controversy, reported by Arthur (2014), raised by Facebook’s study (Kramer 2012) of users’ emotions. More recently, Google’s study of users’ security questions (Bonneau et al. 2015) has also raised a few eyebrows.
[9] Note that this can be both an advantage and a hindrance, as old habits may impede the emergence of new practices.

Chapter bibliography

Aamodt, Agnar, and Enric Plaza. 1994. “Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches.” AI Communications 7 (1): 39–59.
Arthur, Charles. 2014. “Facebook Emotion Study Breached Ethical Guidelines, Researchers Say.” The Guardian. June 30, 2014. http://www.theguardian.com/technology/2014/jun/30/facebook-emotion-study-breached-ethical-guidelines-researchers-say.
Auer, Sören, and Heinrich Herre. 2007. “RapidOWL — An Agile Knowledge Engineering Methodology.” In Perspectives of Systems Informatics, edited by Irina Virbitskaite and Andrei Voronkov, 4378:424–30. LNCS. Springer. http://www.springerlink.com.gate6.inist.fr/content/5l6r104080127u34/abstract/.
Berners-Lee, Tim, James Hendler, Ora Lassila, and others. 2001. “The Semantic Web.” Scientific American 284 (5): 28–37.
Bonneau, Joseph, Elie Bursztein, Ilan Caron, Rob Jackson, and Mike Williamson. 2015. “Secrets, Lies, and Account Recovery: Lessons from the Use of Personal Knowledge Questions at Google.” In 24th International Conference on World Wide Web, 141–150. Florence, Italy: ACM. http://dl.acm.org/citation.cfm?id=2736277.2741691.
Canals, Gérôme, Amélie Cordier, Emmanuel Desmontils, Laura Infante-Blanco, and Emmanuel Nauer. 2013. “Collaborative Knowledge Acquisition under Control of a Non-Regression Test System.” In Workshop on Semantic Web Collaborative Spaces. Montpellier, France. https://hal.archives-ouvertes.fr/hal-00880347.
Chellas, Brian F. 1980. Modal Logic: An Introduction. Cambridge [Eng.] ; New York: Cambridge University Press.
Cordier, Amélie. 2008. “Interactive and Opportunistic Knowledge Acquisition in Case-Based Reasoning.” Thèse de Doctorat en Informatique, Université Lyon 1. http://liris.cnrs.fr/publis/?id=3776.
Cummins, Lisa, and Derek Bridge. 2011. “Choosing a Case Base Maintenance Algorithm Using a Meta-Case Base.” In Research and Development in Intelligent Systems XXVIII, 167–180. Springer. http://link.springer.com/chapter/10.1007/978-1-4471-2318-7_12.
Delannoy, J. F., C. Feng, S. Matwin, and S. Szpakowicz. 1993. “Knowledge Extraction from Text: Machine Learning for Text-to-Rule Translation.” In Proceedings of European Conference on Machine Learning Workshop on Machine Learning and Text Analysis, 7–13. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.35.8376&rep=rep1&type=pdf.
Hsieh, Cheng-Kang, Hongsuda Tangmunarunkit, Faisal Alquaddoomi, John Jenkins, Jinha Kang, Cameron Ketcham, Brent Longstaff, et al. 2013. “Lifestreams: A Modular Sense-Making Toolset for Identifying Important Patterns from Everyday Life.” In Proceedings of the 11th ACM Conference on Embedded Networked Sensor Systems, 5:1–5:13. SenSys ’13. New York, NY, USA: ACM. https://doi.org/10.1145/2517351.2517368.
Kramer, Adam D.I. 2012. “The Spread of Emotion via Facebook.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 767–770. CHI ’12. New York, NY, USA: ACM. https://doi.org/10.1145/2207676.2207787.
Lopez De Mantaras, Ramon, David Mcsherry, Derek Bridge, David Leake, Barry Smyth, Susan Craw, Boi Faltings, et al. 2005. “Retrieval, Reuse, Revision and Retention in Case-Based Reasoning.” The Knowledge Engineering Review 20 (03): 215–240. https://doi.org/10.1017/S0269888906000646.
Mooney, Raymond J., and Razvan Bunescu. 2005. “Mining Knowledge from Text Using Information Extraction.” SIGKDD Explor. Newsl. 7 (1): 3–10. https://doi.org/10.1145/1089815.1089817.
Nonaka, Ikujiro, and Hirotaka Takeuchi. 1995. The Knowledge Creating Company. Oxford University Press, Oxford (GB).
Richter, Michael M. 2003. “Knowledge Containers.” Readings in Case-Based Reasoning. http://pages.cpsc.ucalgary.ca/~mrichter/Papers/Knowledge%20Containers.pdf.
Schank, R.C. 1982. Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Vol. 240. Cambridge University Press Cambridge.
Schreiber, Guus, Hans Akkermans, Anjo Anjevierden, Robert de Hoog, Nigel Shadbolt, Walter Van de Velde, and Bob Wielinga. 2000. Knowledge Engineering and Management: The CommonKADS Methodology. MIT Press.
Shadbolt, Nigel, Wendy Hall, and Tim Berners-Lee. 2006. “The Semantic Web Revisited.” Intelligent Systems, IEEE 21 (3): 96–101.
Turing, Alan M. 1950. “Computing Machinery and Intelligence.” Mind, 433–460.
Zadeh, L.A. 1965. “Fuzzy Sets.” Information and Control 8 (3): 338–53. https://doi.org/10.1016/S0019-9958(65)90241-X.