
Project | LASAD

Learning to Argue: Generalized Support Across Domains

Argumentation skills are critical both in everyday life, for instance in dealing with personal issues such as communicating with colleagues, and as an integral component of professions such as law and science, yet they are difficult to teach. Mostly, argumentation skills are picked up indirectly and informally through classroom discussion and debate, perhaps with some guidance from a teacher. Such an approach, however, is neither efficient nor scalable: there are simply not enough teachers, nor do those teachers have enough time, to instruct all students in argumentation.

Intelligent Tutoring Systems (ITSs) have been developed to help with this “bottleneck” by providing one-on-one support for learning argumentation. A number of argumentation ITSs have been built in areas as diverse as scientific, classroom, and legal argumentation. Most of these systems use graphical representations to help students relate their points to one another and to visually depict the evolving argument. For instance, one student might make a “claim” using one type of shape, another student might respond with a “counterclaim” using a different shape, and the second student might then connect the two with an “opposes” link.
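In data terms, such a diagram is simply a typed graph of contributions and links. The following minimal Java sketch (illustrative only; the class, field, and type names are ours, not the LASAD data model) shows how the claim/counterclaim example above could be represented:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal, illustrative model of a graphical argument:
// typed boxes (contributions) connected by typed links.
class Contribution {
    final String id;
    final String type;   // e.g. "claim", "counterclaim", "evidence"
    final String text;
    Contribution(String id, String type, String text) {
        this.id = id; this.type = type; this.text = text;
    }
}

class Link {
    final String type;   // e.g. "opposes", "supports"
    final Contribution from;
    final Contribution to;
    Link(String type, Contribution from, Contribution to) {
        this.type = type; this.from = from; this.to = to;
    }
}

class ArgumentGraph {
    final List<Contribution> contributions = new ArrayList<>();
    final List<Link> links = new ArrayList<>();

    public static void main(String[] args) {
        ArgumentGraph graph = new ArgumentGraph();
        Contribution claim = new Contribution("c1", "claim", "The defendant acted in self-defense.");
        Contribution counter = new Contribution("c2", "counterclaim", "The threat had already passed.");
        graph.contributions.add(claim);
        graph.contributions.add(counter);
        // The second student links the counterclaim to the claim with an "opposes" relation.
        graph.links.add(new Link("opposes", counter, claim));
    }
}
```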

Most existing ITSs for argumentation suffer from at least a few drawbacks. First, they tend to be domain specific: they do not work (or at least have not been tested) outside their original domain of application, because the chosen domains have peculiarities of argumentation for which the systems were specifically developed. Second, they rely on graphical representations that, while very helpful to students in understanding argument structure and following an evolving debate, have complicated semantics that are hard for computational methods to parse, especially compared to traditional “form filling” user interfaces with explicit and well-defined input fields. Finally, even if the graphs were more semantically perspicuous, the various argumentative domains, for example science, the law, and the classroom, tend to be inherently ill-structured: it is not always clear, even for humans, what constitutes a winning, or even just a reasonable, argument. In many domains, an argument is good only as long as no convincing counter-argument has been presented. For all of these reasons, building intelligent educational technology that supports students’ learning of argumentation skills is very hard indeed.

Our objective in project LASAD is to create a generalized framework and methodology for constructing argumentation support systems that help students learn argumentation in different domains. Realizing this goal involves researching a reusable ontology of argumentation learning objects; a large set of visual, analytic, and pedagogic components that can be combined in different ways to create different domain-specific argumentation tutoring systems; and an interoperable, domain-independent software architecture that allows the flexible integration of the researched methods and components.
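As a hypothetical sketch of what such a composition could look like (the ontology and component names below are purely illustrative and are not taken from the project), a domain-specific system might be described by a configuration that fixes the contribution and link types and selects the components to plug in:

```java
import java.util.List;

// Illustrative sketch: the ontology fixes the contribution and link types
// for a domain, while visual, analytic, and pedagogic components are
// selected per system. Names and values are assumptions, not LASAD's.
record Ontology(List<String> contributionTypes, List<String> linkTypes) {}

record DomainConfiguration(String domain,
                           Ontology ontology,
                           List<String> visualComponents,
                           List<String> analysisComponents,
                           List<String> feedbackComponents) {}

class ConfigurationExample {
    public static void main(String[] args) {
        DomainConfiguration legal = new DomainConfiguration(
            "legal argumentation",
            new Ontology(List.of("claim", "counterclaim", "precedent"),
                         List.of("supports", "opposes", "distinguishes")),
            List.of("graph-editor"),
            List.of("structure-analyzer"),
            List.of("hint-provider"));
        System.out.println(legal);
    }
}
```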

We address the challenge of building a domain-general ITS for argumentation by distilling the commonalities across prior research on educational argumentation support systems and extracting generalizable design patterns, ITS principles, and intelligent analysis techniques from these systems. These will be transformed into a customizable and interoperable set of methods that can be used, in the form of a software-based system, to flexibly and effectively support the learning of argumentation in different domains.
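One example of an analysis technique that generalizes across domains is checking the structure of the argument graph rather than its domain-specific content. The sketch below (our own illustration, building on the Contribution/ArgumentGraph classes from the earlier sketch) flags contributions that are not linked to anything, a candidate trigger for a structural hint in any domain:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example of a reusable, domain-independent analysis method:
// flag contributions that are not connected to the rest of the argument,
// regardless of which domain-specific types the ontology defines.
// Reuses the Contribution, Link, and ArgumentGraph classes sketched above.
class UnconnectedContributionCheck {
    List<Contribution> findUnconnected(ArgumentGraph graph) {
        List<Contribution> unconnected = new ArrayList<>();
        for (Contribution c : graph.contributions) {
            boolean linked = graph.links.stream()
                .anyMatch(l -> l.from == c || l.to == c);
            if (!linked) {
                unconnected.add(c);   // candidate for a structural hint to the student
            }
        }
        return unconnected;
    }
}
```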

The pedagogical effectiveness (i.e., the effect on students’ learning of argumentation skills) and the domain generality (i.e., the applicability to different argumentation domains) of three prototypes developed using our framework and methodology will be empirically tested in a formal lab experiment in one domain (scientific argumentation) and evaluated by experts in two other domains (legal argumentation and engineering ethics).

Partners

TU Clausthal/Clausthal University of Technology

Publications about the project

  1. An Analysis and Feedback Infrastructure for Argumentation Learning Systems

    Oliver Scheuer; Bruce McLaren; Frank Loll; Niels Pinkwart

    In: Vania Dimitrova; Riichiro Mizoguchi; Benedict du Boulay; Art Graesser (Eds.). Proceedings of the 14th International Conference on Artificial Intelligence in Education. International Conference on Artificial Intelligence in Education (AIED-09), July 6-10, 2009, Brighton, United Kingdom, Pages 629-631, IOS Press, 7/2009.

Sponsors

DFG - German Research Foundation