This paper presents a system architecture that supports user-preferred modalities by separating service logic from interfaces. Today many web services, such as form-based book shopping, and information appliances, such as remotely controllable VTRs and rice cookers, are in everyday use. Most of their interfaces are GUIs, regardless of users' properties such as physical characteristics, preferences, and usage context. Several solutions address this modality problem, including multimodal interface development tools; however, it is hard to infer these user properties. The purpose of our system is to provide an environment where users can use services through their own interfaces according to the situation. Interfaces are separated from traditional services, and their connections are established on demand. Interface descriptions must be highly abstract so that multimodal interfaces can be rendered from them. We therefore developed a language named the Abstract Interaction Description Language (AIDL). To render an actual interface from a description, the system relies on intelligent information technology to reason about the user's preferences and the renderer's capabilities.
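The separation described above can be illustrated with a minimal sketch. The structure below is hypothetical and does not reflect the actual AIDL notation: a single abstract interaction description (here a plain dictionary with invented `role` and `label` fields) is rendered either as GUI widget specifications or as voice-dialogue prompts, depending on the user's preferred modality, while the service logic stays unchanged.

```python
# Hypothetical sketch only: the description format and renderer names below
# are illustrative assumptions, not the paper's actual AIDL syntax.

# One abstract interaction description, independent of any concrete modality.
ABSTRACT_DESCRIPTION = {
    "task": "order_book",
    "elements": [
        {"id": "title", "role": "text_input", "label": "Book title"},
        {"id": "confirm", "role": "trigger", "label": "Order"},
    ],
}

def render_gui(description):
    """Render the abstract description as GUI widget specifications."""
    widget_for = {"text_input": "TextField", "trigger": "Button"}
    return [f'{widget_for[e["role"]]}("{e["label"]}")'
            for e in description["elements"]]

def render_voice(description):
    """Render the same description as voice-dialogue prompts."""
    prompt_for = {
        "text_input": "Please say the {label}.",
        "trigger": 'Say "yes" to {label}.',
    }
    return [prompt_for[e["role"]].format(label=e["label"].lower())
            for e in description["elements"]]

def render(description, preferred_modality):
    """Select a renderer on demand from the user's preferred modality."""
    renderers = {"gui": render_gui, "voice": render_voice}
    return renderers[preferred_modality](description)
```

In this sketch the on-demand connection between service and interface reduces to a dictionary lookup; in the actual system, reasoning over user preferences and renderer capabilities would select and configure the renderer instead.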
1) Notice for the use of this material: The copyright of this material is retained by the Information Processing Society of Japan (IPSJ). This material is published on this web site with the agreement of the author(s) and the IPSJ. Please comply with the Copyright Law of Japan and the Code of Ethics of the IPSJ if you wish to reproduce, make derivative works of, distribute, or make available to the public any part or the whole of this material.