Annika Flycht-Eriksson

A Survey of Knowledge Sources in Dialogue Systems

[Full Text]
[send contribution]
[debate procedure]
[copyright]


Overview of interactions

No   Comment(s)                    Answer(s)                         Continued discussion
1    24.2.00 McConachy, Zukerman   22.3.00 Annika Flycht-Eriksson    30.3.00 Zukerman

C1. Richard McConachy and Ingrid Zukerman (22.2.00):

We found this paper very informative. The descriptions of what each model means and its role in the different systems are clear and well presented. However, it would be good if you could expand on some ideas with more grounded examples. For instance, statements like ``the presence of an explicit model of the system tasks ... can make the dialogue more fluent ... '' (.ps version page 6, column 1) require such examples. There are several such statements in the paper.

Six of the seven systems described in the paper can very loosely be grouped together as `directly aiding the user with some practical query'. VERBMOBIL is a little different in that it acts as an intermediary between two people rather than dealing directly with a user's queries itself, but it could still loosely be grouped in the `aid with query' category. Do you think that this similarity of system focus tends to make the usage and interplay of the various models more similar than would be the case if intelligent tutoring systems or argumentation systems were included? (We are ignoring the extra models these systems bring forward, which you have quite reasonably elected to leave alone.) In the conclusion, you suggest defining a taxonomy of dialogue types as a next step in this line of research. We think it is a good idea to start with systems that fall under the loose `aid and inform' umbrella. What are your views about incorporating wider-coverage systems, such as argumentation and tutoring?

Additional (more detailed) questions

References
Zukerman, I., McConachy, R., and Korb, K. (1998), Bayesian Reasoning in an Abductive Mechanism for Argument Generation and Analysis. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), Madison, Wisconsin, pp. 833-838, AAAI Press.


A1. Annika Flycht-Eriksson (22.3.00):

Thank you for your many interesting comments. I hope we can continue discussing some of the issues you raise.

Comments:

We found this paper very informative. The descriptions of what each model means and its role in the different systems are clear and well presented. However, it would be good if you could expand on some ideas with more grounded examples. For instance, statements like ``the presence of an explicit model of the system tasks ... can make the dialogue more fluent ... '' (.ps version page 6, column 1) require such examples. There are several such statements in the paper.

The authors reply:

I will try to insert more examples. Regarding the system task models, this can be exemplified by the following dialogue:

In this dialogue the system has to deal with three different tasks. U1 initiates the system task of finding a bus trip, U2 opens another task, that of giving information about the domain, and U3 initiates yet another task, that of providing system information. With the use of explicit task models the system can easily switch between tasks and continue the task initiated by U1 and resumed by the user in U4.

The task of providing trip information is fairly complex, since the user has to specify a number of different parameters. In this case a system task model also helps the system to decide what information it has to request from the user, but the system can still be very flexible, since the information does not have to be provided in any specific order.
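
To make this more concrete, here is a minimal sketch in Python of how explicit system task models could support such task switching and flexible parameter gathering. The task names, slots, and manager interface are hypothetical illustrations, not taken from any of the surveyed systems:

    # A minimal sketch of explicit system task models; all task names
    # and slots are hypothetical.

    class Task:
        def __init__(self, name, slots):
            self.name = name
            # Each slot is a parameter the system must obtain from the user.
            self.slots = {slot: None for slot in slots}

        def fill(self, slot, value):
            self.slots[slot] = value

        def missing(self):
            # Slots can be filled in any order; the task model only
            # records what is still needed.
            return [s for s, v in self.slots.items() if v is None]

    class DialogueManager:
        def __init__(self):
            self.stack = []  # active task on top, suspended tasks below

        def push(self, task):
            self.stack.append(task)

        def finish_current(self):
            # Completing a subtask resumes whatever task was suspended.
            return self.stack.pop()

        def current(self):
            return self.stack[-1]

    # Mirroring the dialogue discussed above (utterances paraphrased):
    dm = DialogueManager()
    dm.push(Task("find_trip", ["departure", "destination", "time"]))  # U1
    dm.push(Task("domain_info", ["query"]))                           # U2
    dm.current().fill("query", "bus stops near the railway station")
    dm.finish_current()                                               # answered
    dm.push(Task("system_info", ["query"]))                           # U3
    dm.current().fill("query", "what the system can do")
    dm.finish_current()                                               # answered
    dm.current().fill("departure", "railway station")                 # U4
    print(dm.current().name, dm.current().missing())
    # -> find_trip ['destination', 'time']

The point of the stack is only to show that suspending and resuming tasks becomes straightforward once each task is an explicit object with its own open parameters.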

Ingrid's rejoinder:

I believe the above example would be clearer if you also analyzed it from the viewpoint of a dialogue model. For instance, S1 starts a clarification subdialogue, and U2 constitutes an indirect reply to this clarification. This reply can be read as composed of two parts:

reply: I want to go from the railway station.
confirmation: are there any bus stations there?

How would this match with the task model?

Rejoinder Answer

I think that how and when a system task model is used is coupled to how the system interprets utterances and to the dialogue model used by the system. The dialogue presented above is an example of a dialogue with a system that has a grammar-based dialogue model, which does not try to model user intentions. U2 is considered to be a separate although relevant question, and information about the departure location is therefore not incorporated in the system task model until the user explicitly states that (s)he wants to go from there (utterance U4).

If one takes U2 to be an indirect reply to S1, as you suggest, the system task model should be updated with the information that the departure location is the railway station, and the system should then answer the question of whether there is any nearby bus stop. However, this approach is problematic if there is no nearby bus stop, which means that the user cannot leave from the suggested location. In such a case the user has to explicitly state that (s)he does not want to go from there, or the system itself has to take the initiative to retract the information about the departure location from the system task model.
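
The retraction problem can be sketched in the same hypothetical style as the earlier example; nearby_bus_stop stands in for an imagined backend lookup:

    # Sketch of the retraction problem under the indirect-reply reading
    # of U2; the slot structure and backend lookup are hypothetical.
    trip_slots = {"departure": None, "destination": None, "time": None}

    # Indirect reading of U2: tentatively assume the departure location.
    trip_slots["departure"] = "railway station"

    def nearby_bus_stop(location):
        # Imagined backend lookup; here no bus stop exists near the location.
        return False

    if not nearby_bus_stop(trip_slots["departure"]):
        # The assumption fails, so the system itself must take the
        # initiative to retract the departure location from the task model.
        trip_slots["departure"] = None

    print([s for s, v in trip_slots.items() if v is None])
    # -> ['departure', 'destination', 'time'], i.e. departure is open again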

Comments:

Six of the seven systems described in the paper can very loosely be grouped together as `directly aiding the user with some practical query'. VERBMOBIL is a little different in that it acts as an intermediary between two people rather than dealing directly with a user's queries itself, but it could still loosely be grouped in the `aid with query' category. Do you think that this similarity of system focus tends to make the usage and interplay of the various models more similar than would be the case if intelligent tutoring systems or argumentation systems were included? (We are ignoring the extra models these systems bring forward, which you have quite reasonably elected to leave alone.)

The authors reply:

I have to admit that my knowledge about tutoring and argumentation systems is limited, but I think that for some of the models their roles differ between these kinds of systems and the ones described in my paper. User models are probably used differently and may be related to the other models in other ways. Another difference concerns the distinction between system and user tasks, which I think will be drawn differently in tutoring and argumentation systems.

Comments:

In the conclusion, you suggest defining a taxonomy of dialogue types as a next step in this line of research. We think it is a good idea to start with systems that fall under the loose `aid and inform' umbrella. What are your views about incorporating wider-coverage systems, such as argumentation and tutoring?

The authors reply:

I agree that it is a good idea to start with the aid and inform type of systems. However, it would be very interesting to expand the taxonomy to also include argumentation and tutoring. A possible approach is to construct a set of features that can be used to describe the functionality of the former type of systems and see how these features can be mapped to the different models. An example feature could be the ability to handle several different tasks, which can be achieved by the use of explicit system task models. It would then be possible to expand the set of features to cover argumentation and tutoring systems as well. Some of the features will probably be common to all types of systems, while others might be specific to one type, which might be reflected in the use of a specific model.
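
As a rough illustration of this approach, one could start from a mapping along the following lines (the feature and model names are invented for the sketch, not taken from the paper):

    # Hypothetical feature-to-model mapping sketching the proposed
    # taxonomy approach; feature and model names are illustrative only.
    feature_to_models = {
        "handle several different tasks":    ["system task model"],
        "flexible parameter gathering":      ["system task model", "dialogue model"],
        "answer questions about the domain": ["domain model", "conceptual model"],
        "adapt to the user's expertise":     ["user model"],  # likely central in tutoring
    }

    for feature, models in feature_to_models.items():
        print(f"{feature}: {', '.join(models)}")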

Additional (more detailed) questions

The reference to [Zukerman and McConachy, 1993] is not accurate. That system deals with descriptions, not with argumentation. An appropriate reference may be [Zukerman et al., 1998].

Your distinction of the different dimensions of dialogue systems is a bit blurred (.ps version, page 1). You mix the dimensions themselves with their actual values, e.g., grammar-based and plan-based are two values for the `approach' dimension, general-purpose and domain-specific are two values for `applicability', and so on. Can you please make this more precise?

The authors reply:

The dimensions and their possible values should be:

purpose of usage: commercial or research
approach to dialogue management: plan-based or grammar-based
applicability: general-purpose or domain-specific
modalities: speech-only or multimodal
type of task: information retrieval or task planning

Comments:

We are a bit confused by the VERBMOBIL discussion on page 2 (.ps version). The description makes the dialogue planner sound like a plan-based module, but then you say it is grammar-based.

The authors reply:

The last sentence in paragraph 4 of the right-hand column should be left out. The dialogue planner IS a plan-based module.

Comments:

In your discussion of domain models and conceptual models you support the claim by Dahlback and Jonsson that often one of these models is enough, but in a few cases both are necessary. It would be interesting if you could provide a characterization of the circumstances or types of systems where one is enough or both are necessary.

We think your characterization of tasks should be qualified. You say that ``a user's task is non-linguistic and takes place in the real world''. This is true for information retrieval and database systems, but not so for tutoring or argumentation systems.

The authors reply:

The distinction is important primarily for systems of the 'simple service' character described in the paper. For other types of systems it might be more or less important to maintain the distinction, and it may have to be drawn in another way.

Ingrid's rejoinder:

I am not sure your reply answers the comment in the first paragraph. Can you please elaborate?

Rejoinder Answer

I think that the distinction between conceptual and domain models is very interesting and something that deserves more investigation. I cannot give a detailed characterisation of when the different types of models are needed, but I think that for simple information retrieval systems, where domain knowledge is used only to interpret user utterances and to find the right concepts for accessing a database, a conceptual model is sufficient. If the background system consists of more diverse and complex knowledge sources, a domain model is probably needed. The need for conceptual and domain models thus seems to depend on the kind of background system used. There are of course other factors that have to be considered, for example the sub-language used and how users refer to different objects.
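
A small sketch of what "a conceptual model is sufficient" could mean in practice for such a simple information retrieval system; the terms and database concepts are invented for the example:

    # Minimal sketch of a conceptual model for a simple information
    # retrieval system: sub-language terms are mapped onto the concepts
    # used to access a database. All names are hypothetical.
    conceptual_model = {
        "leave":   "departure_time",
        "depart":  "departure_time",
        "go from": "departure_stop",
        "go to":   "arrival_stop",
    }

    def concepts_in(utterance):
        # Interpretation: find the database concepts an utterance mentions.
        return {c for term, c in conceptual_model.items() if term in utterance}

    print(concepts_in("when does the bus go from the railway station?"))
    # -> {'departure_stop'}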

Comments:

We disagree with your comment that information about underlying intentions is not necessary for [an information retrieval] system to be able to respond appropriately (.ps version, page 5, second column). Consider a situation where the user wants to know the location of a particular bus stop with the intention of taking the bus late at night (but this particular bus line stops running earlier). In this case, giving the location of the bus stop is unsatisfactory.

The authors reply:

This example raises a very interesting issue. There are several possible approaches to handling questions like this, and we do not think that one necessarily has to take the user's intentions into account. We have, for example, considered using user task models to capture this kind of information. Another solution could be to make the interaction among different system task models more sophisticated. If the user has, for example, stated that (s)he wants to go by bus late at night, and then asks for a specific bus stop, the system could see the relation between the different tasks and deduce that only bus stops that are passed by buses late at night should be considered.
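
A sketch of this second solution, where a constraint stated in one task (travelling late at night) restricts a later bus-stop lookup; the timetable data is invented for the example:

    # Sketch of interaction between system task models: a constraint from
    # an earlier task restricts a later bus-stop lookup. Invented data.
    from datetime import time

    last_departures = {
        "railway station": [time(23, 30), time(6, 15)],
        "city library":    [time(18, 45)],
    }

    def stops_served_after(t):
        # Only consider stops passed by buses after the stated travel time.
        return [stop for stop, deps in last_departures.items()
                if any(d >= t for d in deps)]

    # The user earlier stated (s)he wants to travel late at night:
    print(stops_served_after(time(23, 0)))
    # -> ['railway station']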

Comments:

You also say that ``a domain model is in many cases necessary to make the dialogue system natural and intuitive to use'' (.ps version, page 5, second column). It would be helpful if you could distinguish between domain knowledge that is used for dialog only and domain knowledge that is used for the task.

The authors reply:

Actually, I think that the distinction between conceptual and domain models matches the different types of domain knowledge you mention. In the quote above I am referring to domain knowledge used for the task, although the other type of domain knowledge, represented in a conceptual model, can be used for the same purpose.

Ingrid's rejoinder:

Do you mean that the conceptual model is used both for dialogue and for the task, while the specific model is used only for the task? (Just making sure I understand.)

Rejoinder Answer

I have not come across any system that in practice makes a distinction between conceptual and domain models; in reality, many existing systems do not even clearly separate dialogue, task, and domain knowledge. However, I think this is how it should be done: domain knowledge primarily used for interpretation, dialogue, and generation should be represented in a conceptual model, while domain knowledge primarily used for the task could be modelled separately.

Comments:

In your discussion of the user models and discourse models (Section 4.1), it may be worth considering the idea that the interpretation of the information in the discourse model depends on the information in the user model.

The authors reply:

ok


Additional questions and answers will be added here.
To contribute, please click [send contribution] above and send your question or comment as an E-mail message.
For additional details, please click [debate procedure] above.
This debate is moderated by the guest Editors.