

There is an abundance of user generated content in the Social Web, offering a rich source of individual user viewpoints by concentrating a wealth of diverse opinions and experiences embedded in textual comments. Capturing and analysing user viewpoints in order to explore this diversity can be very useful for user modelling, personalisation and adaptation. The key challenge is to design and develop qualitative methods and tools that enable deeper reasoning, in order to exploit user viewpoints in user adaptive systems. This work aims to address the following research questions:

(RQ-1) Can Semantic Web technologies be used to process the user generated content in order to derive viewpoints?

(RQ-2) How can a user's viewpoint be formally represented and what are the main viewpoint components to be modelled?

(RQ-3) Can we develop methods and techniques to automatically extract, analyse and compare user viewpoints?

(RQ-4) What are the implications for user modelling, adaptation and personalisation?


The user generated content concerns textual contributions in Social Spaces such as YouTube. To address RQ-1, a semantic annotation and augmentation framework, ViewS, has been developed, which exploits state-of-the-art Information Extraction methods and techniques to process the content. The semantically augmented content is then visualised in a semantic space that comprises ontologies representing the desired dimensions. The visualisation can be used to address RQ-2, i.e. to inform future development, including the required representation components of a viewpoint and the design of automatic computational methods for extracting, analysing and comparing viewpoints in RQ-3.
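The core annotation idea described above can be illustrated with a minimal sketch: tokens from a user comment are matched against concept lexicons for each ontology dimension, yielding (token, dimension, concept) annotations. All names and dictionary entries below are hypothetical stand-ins for the WordNet-based lexicons and ontologies ViewS actually uses, not the framework's real pipeline.

```java
import java.util.*;

// Minimal dictionary-lookup annotation sketch (hypothetical names):
// each ontology dimension maps surface terms to ontology concepts.
public class AnnotatorSketch {
    // dimension -> (surface term -> ontology concept); a toy stand-in
    // for the WordNet / WordNet-Affect lexicons used in ViewS
    static final Map<String, Map<String, String>> LEXICONS = Map.of(
        "Emotion", Map.of("nervous", "Anxiety", "happy", "Joy"),
        "BodyLanguage", Map.of("handshake", "Gesture", "eye", "EyeContact")
    );

    // Returns "token/dimension/concept" strings for every matched token.
    public static List<String> annotate(String comment) {
        List<String> annotations = new ArrayList<>();
        for (String token : comment.toLowerCase().split("\\W+")) {
            for (Map.Entry<String, Map<String, String>> dim : LEXICONS.entrySet()) {
                String concept = dim.getValue().get(token);
                if (concept != null) {
                    annotations.add(token + "/" + dim.getKey() + "/" + concept);
                }
            }
        }
        return annotations;
    }

    public static void main(String[] args) {
        System.out.println(annotate("He looked nervous during the handshake"));
    }
}
```

A real pipeline would additionally disambiguate word senses and exploit similarity measures (see the Technical References below), rather than relying on exact string matches.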

The long-term objective in RQ-4 is to identify and evaluate how user viewpoints in the Social Web can be used to inform the design and development of user adaptive systems.

Example scenarios

The viewpoint semantics, together with the user profile characteristics, can be used to augment and enrich existing user models with elements (viewpoints) from the Social Web. Profiles used in the ImREAL simulators can be aligned (or matched) with profiles from the Social Web, either for enrichment or for sensing, in order to bypass some of the cold-start limitations regarding the individual conceptualisations of each learner.

Another example scenario is that the semantic output of the augmentation can be used by simulator designers to enrich the simulation content with elements from the Social Web. In this case, concepts from the semantically augmented content can be used as seeds for specific simulation steps and shape a simulation context that reflects the underlying viewpoint. Given a user model in the simulation, the designer can also expose the learner to diverse situations by comparing his/her viewpoint with others', based on the two profiles: the simulator's and the Social Web profile.

A simulator designer uses Social Web content to get ideas about which activity aspects to include or expand in the simulation, and selects a set of videos on YouTube which represent the activity to be simulated. The ViewS framework extracts the comments from these videos, and the designer then uses the visualiser Visual-ViewS, together with the ontology(ies) representing the activity aspect(s) (Body Language and Emotion), to browse the content. Example cases include:

--Show the group viewpoint on a particular video set (identify areas which people are discussing and include these in the simulation, e.g. either in the situation or in the possible options for the user)

--Select two user groups and compare their viewpoints (e.g. people with limited experience aged 16-25 vs. people with extensive experience over 60; or gender- or location-based groups) - this can help with adaptation (e.g. providing different options)


The ViewS semantic augmentation framework is available as a standalone Java application, as is the visualisation tool, ViewS Microscope.

The tools have been configured to process (semantically annotate) and visualise content which includes terms that can be linked to Social Signals in Interpersonal Communication. This includes Emotion and Body Language related concepts. These two dimensions are represented by ontologies that are used for the Information Extraction with ViewS and for the visualisation of the semantic viewpoint spaces.


The annotation demo software is available for download here: AnnotationDemo. The application size is large (~6GB) due to the large dictionary files from Wikipedia, used for enriching the semantic augmentation process.
Unzip the folder and execute the ViewSAnnotationDemo.jar file. You will need Java installed on your machine, as well as at least 3GB of RAM. As the screenshot below shows, you can enter some text in the input text field and press the Run button to generate the annotations; an annotations list will then be populated. An XML output is also available, which can be used as input to the visualisation tool by copying and pasting it into a text editor and saving it with the .xml file extension.

Screenshot of the ViewS Semantic Augmentation component

Each annotation element is qualified with the textual token, the dimension (i.e. Emotion or Body Language), the ontology concept, and the lexicon sense from which the token-concept pair was extracted.
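A consumer of the XML output could read these fields with the standard Java DOM API. The element and attribute names in this sketch are hypothetical illustrations of the four fields just described (token, dimension, concept, sense), not the exact ViewS schema.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Sketch of reading one annotation element from the XML output.
// Element/attribute names are hypothetical, not the real ViewS schema.
public class AnnotationReader {
    public static String describe(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Element a = (Element) doc.getElementsByTagName("annotation").item(0);
        return a.getAttribute("token") + " -> "
             + a.getAttribute("dimension") + ":" + a.getAttribute("concept")
             + " (sense: " + a.getAttribute("sense") + ")";
    }

    public static void main(String[] args) throws Exception {
        String xml = "<annotations>"
            + "<annotation token=\"nervous\" dimension=\"Emotion\""
            + " concept=\"Anxiety\" sense=\"wn:nervous#a#1\"/>"
            + "</annotations>";
        // prints: nervous -> Emotion:Anxiety (sense: wn:nervous#a#1)
        System.out.println(describe(xml));
    }
}
```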

The annotation pipeline is depicted in the figure below.

Pipeline for the Semantic Augmentation
Technical References
  1. WordNet Lexicon
  2. WordNet-Affect Emotion Taxonomy
  3. Suggested Upper Merged Ontology
  4. DISCO Similarity
  5. Jena Semantic Web Framework

ViewS Microscope

The visualisation software is available for download here: ViewSMicroscope.
Unzip the folder and execute the ViewSToolkit.jar file. You can load ontologies from files (under the Files directory) or online, visualise the semantic maps of annotated data, extract the viewpoint focus, and compare viewpoints.
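One simple way to think about the "compare viewpoints" operation is to summarise each group's viewpoint as a set of discussed ontology concepts and report the concepts distinctive to one group. The sketch below is an illustrative interpretation with made-up data, not the actual metric implemented in ViewS Microscope.

```java
import java.util.*;

// Illustrative viewpoint comparison: each group's viewpoint is a map of
// ontology concepts to mention counts; the comparison reports concepts
// that only the first group discusses. Not the actual ViewS metric.
public class ViewpointCompare {
    public static Set<String> distinctive(Map<String, Integer> groupA,
                                          Map<String, Integer> groupB) {
        Set<String> onlyA = new TreeSet<>(groupA.keySet());
        onlyA.removeAll(groupB.keySet()); // keep concepts unique to group A
        return onlyA;
    }

    public static void main(String[] args) {
        // hypothetical concept frequencies for two user groups
        Map<String, Integer> novices = Map.of("Anxiety", 12, "EyeContact", 5);
        Map<String, Integer> experts = Map.of("EyeContact", 9, "Posture", 7);
        System.out.println(distinctive(novices, experts)); // prints: [Anxiety]
    }
}
```

Such a comparison could, for instance, surface that inexperienced users focus on Anxiety while experienced users do not, which is the kind of contrast the group-comparison scenario above is after.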

Screenshots of the ViewS Microscope software for browsing the viewpoint semantic spaces

You can load an annotated dataset from the File menu. A data set is provided with the application; an extended version covering 600 videos can also be found in the Datasets section of this page. It comprises videos related to job interviews (such as exemplar and instructional videos), together with their comments and user profiles from YouTube. The textual comments have been semantically augmented with ViewS. The figure below depicts the methodology followed for content collection and filtering.

The data collection methodology
Technical References
  1. Prefuse Visualisation Library
  2. Jena Semantic Web Framework
  3. YouTube Java Data API 2.0


The comments on job interview videos in YouTube collected in a controlled user study, together with the (merged) validation/evaluation data, are available here to download.
A collection of 600 YouTube video URIs related to job interview situations, with the corresponding comments and user profiles, is available here to download. The corpus has been semantically augmented with ViewS and can be used as input to ViewS Microscope. Please note that this file takes considerable time to load and to perform selections on.


  1. DESPOTAKIS, D., DIMITROVA, V., LAU, L. & THAKKER, D. 2013. Semantic Aggregation and Zooming of User Viewpoints in Social Media Content. In: CARBERRY, S., WEIBELZAHL, S., MICARELLI, A. & SEMERARO, G. (eds.) User Modeling, Adaptation, and Personalization. Springer Berlin Heidelberg.
  2. DESPOTAKIS, D., DIMITROVA, V., LAU, L., THAKKER, D., ASCOLESE, A. & PANNESE, L. 2013. ViewS in User Generated Content for Enriching Learning Environments: A Semantic Sensing Approach. Artificial Intelligence in Education (AIED). Memphis, Tennessee, USA.
  3. DIMITROVA, V., STEINER, C. M., DESPOTAKIS, D., BRNA, P., ASCOLESE, A., PANNESE, L. & ALBERT, D. To Appear. Semantic Social Sensing for Improving Simulation Environments for Learning. European Conference on Technology Enhanced Learning (ECTEL), 2013.
  4. DESPOTAKIS, D., THAKKER, D., DIMITROVA, V. & LAU, L. 2012. Diversity of user viewpoints on social signals: a study with YouTube content. International Workshop on Augmented User Modeling (AUM) in UMAP. Montreal, Canada.
  5. DESPOTAKIS, D. 2011. Multi-perspective Context Modelling to Augment Adaptation in Simulated Learning Environments. In: KONSTAN, J., CONEJO, R., MARZO, J. & OLIVER, N. (eds.) User Modeling, Adaption and Personalization. Springer Berlin / Heidelberg.
  6. DESPOTAKIS, D., LAU, L. & DIMITROVA, V. 2011. A Semantic Approach to Extract Individual Viewpoints from User Comments on an Activity. International Workshop on Augmented User Modeling (AUM) in UMAP. Girona, Spain.
Dimoklis Despotakis, University of Leeds, School of Computing, E-mail: scdd@leeds.ac.uk
Contributors to the work:
Dr. VG Dimitrova, Dr. Lydia Lau, Dr. Dhaval Thakker