Assistive technology rapid integration and construction set

From 2010-01-01 to 2012-12-31

More than 2.6 million people in Europe have problems with their upper limbs, and many of them therefore depend on Assistive Technologies (AT) to access a PC or any other ICT, including mobile technology, and to control their environment (to switch on equipment, to open doors, etc.).
One problem of existing AT devices is that they cannot be adapted or combined to support individual capabilities of the user. Very often, these devices work in a defined setup and their function cannot be changed or applied in different use cases.
The aim of the project has been to develop a prototype for a flexible AT system that adapts better to the specific needs of the user by combining emerging sensor techniques such as Brain-Computer Interfaces and gaze-tracking systems. People with reduced motor capabilities will have at hand a flexible and adaptable technology that enables them to access Human-Machine Interfaces (HMI) not only on the standard desktop but in particular also on embedded systems such as mobile phones or smart home devices.
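The "construction set" idea can be sketched in a few lines: interchangeable sensor plugins emit events, and a per-user configuration binds those events to actions, so the same sensor can drive different devices for different users. This is a minimal illustration only; class names, the event vocabulary and the binding mechanism are invented here, not taken from the project.

```python
class SwitchSensor:
    """Stand-in for any binary AT input (sip-puff switch, BCI switch, blink detector)."""
    def read(self, raw):
        # Illustrative threshold: any reading above 0.5 counts as an activation.
        return "activate" if raw > 0.5 else None

class Pipeline:
    """Routes sensor events to actions; the mapping is reconfigurable per user."""
    def __init__(self):
        self.bindings = {}          # event name -> action callable

    def bind(self, event, action):
        self.bindings[event] = action

    def process(self, sensor, raw):
        event = sensor.read(raw)
        if event in self.bindings:
            return self.bindings[event]()
        return None

# Per-user configuration: here the switch opens a door; for another user the
# same sensor could be bound to an on-screen keyboard instead.
pipeline = Pipeline()
pipeline.bind("activate", lambda: "door_open")
```

The point of the sketch is that adapting the system to a new user or use case only touches the bindings, never the sensor plugins themselves.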


Brain-neural computer interfaces on track to home: development of a practical generation of BNCI for independent home use

From 2012-01-01 to 2015-06-30


Research efforts have improved Brain-Neural Computer Interface (BNCI) technology in many ways, and numerous applications have been prototyped. Until recently, however, these BNCI systems have been researched almost exclusively in laboratories. Home usage has been demonstrated, though only with ongoing expert supervision. A significant advance in BNCI research, and in its implementation as a feasible assistive technology, is therefore the migration of BNCIs into people's homes to provide new options for communication and control that increase independence and reduce social exclusion.

The goal of BackHome is to move BNCIs from laboratory devices for healthy users toward practical devices used at home by people in need. This implies a system which is easy to set up, portable, and straightforward. Thus, BackHome will (1) develop BNCI systems into practical multimodal assistive technologies that provide useful solutions for communication, web surfing, and environmental control, and (2) provide this technology for home usage with minimal support. These goals will be attained through three key developments: practical electrodes; telemonitoring with home software support; and easy-to-use applications tailored to people's needs.

BackHome will build on ongoing projects in the FP7 BNCI cluster that laid the foundations for this project and provide a network of connections and resources that will be valuable to it. The consortium combines extensive experience in software development, definition of standards, neuroscience and psychology research methods, user-centred approaches, and training users in their homes. We will leverage this experience to get BackHome started quickly, maintain solid interactions with end users, and interact effectively with other key research groups. We will evaluate, disseminate and plan future exploitation of the BackHome scientific and technical results in close interaction with end users.
BackHome will thus have a strong impact on European dominance in the field, in the short and longer term, and could make a real difference not only for the end-users targeted but also for caregivers, support personnel, and medical professionals.


BNCI-driven robotic physical therapies in stroke rehabilitation of gait disorders

From 2010-02-01 to 2013-01-31


Cerebral vascular accident (CVA, stroke) is the most prevalent neurological condition leading to physical impairment in Western society; about 4.7 million stroke survivors are alive today. Impaired walking ability is a major contributor to post-stroke disability: walking incorrectly carries a stigma and makes patients more susceptible to injury, affecting quality of life. The most promising interventions to restore walking function are based on robotic systems that intend to restore function by focusing on actions at the periphery of the body (a BOTTOM-UP approach). It is not clear how effective these treatments are, and a major problem is non-compliance or non-adherence to the therapy.

The main objective of the project is to improve physical rehabilitation therapies of gait disorders in stroke patients based on Brain-Neural Computer Interaction (BNCI) assistive technologies, improving systems, providing guidelines for further improvements, and developing benchmarking tools.

The project will validate, technically, functionally and clinically, the concept of improving stroke rehabilitation with robotic exoskeletons based on a TOP-DOWN approach: motor patterns of the limbs are represented in the cortex, transmitted to the limbs and fed back to the cortex:
-The system will provide means to assess patient adherence to therapy through a multimodal BNCI.
-The proposed BNCI will combine multiple levels of neural information with the resulting motion (biomechanical) data.
-It will determine if training the activation of signals that control lower limb tasks in combination with robotics devices is beneficial for restoring lower limb function.
-BETTER will provide means for objective evaluation of the BNCI-based physical rehabilitation therapy and its usability and acceptability.

BETTER proposes a multimodal BNCI whose main goal is to explore the representations in the cortex, characterize the user's involvement, and modify the intervention at the periphery with ambulatory and non-ambulatory devices.
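The idea of combining neural information with biomechanical data to assess adherence can be illustrated with a toy score (this is not the project's actual algorithm; the weighting, the feature names and the normalisation are invented here purely to show how the two information streams might be fused):

```python
def adherence_score(neural_engagement, tracking_error, w_neural=0.6):
    """Toy fusion of a neural and a biomechanical adherence indicator.

    neural_engagement: normalised [0, 1] measure derived from brain signals
                       (e.g. a band-power change) -- higher means more engaged.
    tracking_error:    normalised [0, 1] deviation from the robot-guided
                       trajectory -- lower means better compliance.
    Returns a single [0, 1] score a therapist could monitor per session.
    """
    if not (0.0 <= neural_engagement <= 1.0 and 0.0 <= tracking_error <= 1.0):
        raise ValueError("inputs must be normalised to [0, 1]")
    biomech = 1.0 - tracking_error          # low error -> high contribution
    return w_neural * neural_engagement + (1.0 - w_neural) * biomech
```

A score like this would only flag sessions for review; interpreting why adherence dropped remains a clinical judgment.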


Autonomy and social inclusion through mixed reality Brain-Computer Interfaces: Connecting the disabled to their physical and social world

From 2010-01-01 to 2012-12-31

Motor disabilities, whatever their origin, have a dramatic effect on people's quality of life. Examples of neurological origin include a person suffering a severe brain injury in a car collision, or individuals who have suffered a stroke. For years, the severely disabled have had to cope with restricted autonomy, which affects daily activities such as moving around or turning on the lights, as well as their ability for social interaction.

The project is about empowering them and seeks to mitigate the everyday limitations with which they are confronted. BrainAble is an innovative platform designed with a user-centric approach to improve physical and social independence, facilitate active living and improve the quality of life of people with different degrees and types of disabilities, and potentially of anybody with special needs. It is a modular system which facilitates human-computer interaction through the latest generation of Brain-Computer Interfaces (BCI), which require no training, are easy to set up, and offer adaptive configurations to meet any user requirements, especially those of the severely disabled. Furthermore, a user with evolving functional diversity can interact with BrainAble using alternative assistive technologies, combined or not with BCI techniques.
Through BrainAble, a disabled user may now interact with other people using email, Facebook or Twitter; control a wheelchair, lights, TV, blinds and doors; play games and navigate virtual communities; and use and enjoy a range of digital devices and services which were not designed for disabled people and which BrainAble offers in a smart, context-aware and assistive way.
BrainAble has been developed by a multidisciplinary team of therapists, carers, engineers and researchers at the frontier of neuroscience, signal processing, assistive technologies and machine learning, and is already impacting the growing market of accessible, inclusive and assistive products from a novel perspective.


Coordination action in R&D in accessible and assistive ICT

From 2010-03-01 to 2013-02-28


The major aim of the Coordination Action would be to improve the overall success of Challenge 7, ICT 2009 7.2 Accessible and Assistive ICT. It would assist those involved in the research aspects of Challenge 7 by identifying the research and development that is needed in this field, both immediately and in the near future; by raising the level of knowledge and understanding of Accessible and Assistive ICT; and by stimulating companies, research institutions and individual experts to become involved in this important area.

The Coordination Action would support the corresponding STREPs and IPs in this research area, creating a knowledge network between them.


Deployment of Brain-Computer Interfaces for the Detection of Consciousness in Non-Responsive Patients

From 2010-02-01 to 2013-04-30


The deployment of Brain-Computer Interfaces (BCI) for non-responsive patients will provide access to modern information and communication technology such as the internet, personal computers or home appliances even when only a single response of a person is available. In this extreme case, no current assistive technology can help the patient interact with the environment. This situation poses serious ethical issues, since medical treatment can prolong patients' lives but leave them in a state of unacceptable quality of life.

DECODER will develop BCIs into single-switch based systems to practically enhance the inclusion of patients who are otherwise only slightly or not at all able to interact with their environment and share in ICT. This will build on improvements to three components of state-of-the-art BCIs, i.e. signal acquisition (input), signal classification, and signal translation (output), adapting them to the specificities of non-responsive patients such as low arousal, short attention span, and altered electrical activity of the brain. A fourth component is the application: existing assistive technology will be adapted to single-switch control.
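The single-switch principle can be sketched as follows: a classifier collapses the decoded brain signal into one binary "select" event, and a scanning interface cycles through the available options so that this single event suffices to choose any of them. The class names, the threshold and the menu items below are illustrative only, not DECODER's actual design:

```python
def classify_switch(feature, threshold=0.7):
    """Collapse a decoded brain-signal feature into a yes/no switch event."""
    return feature >= threshold

class ScanningMenu:
    """Cycle a highlight through options; the single switch selects the current one."""
    def __init__(self, items):
        self.items = items
        self.cursor = 0

    def tick(self, switch_pressed):
        """One scan step: select the highlighted item on a switch event,
        otherwise advance the highlight to the next item."""
        if switch_pressed:
            return self.items[self.cursor]
        self.cursor = (self.cursor + 1) % len(self.items)
        return None
```

With enough patience, a scanning scheme like this lets one reliable response drive arbitrarily rich applications, which is why robust classification of that single response matters so much.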

Besides classic EEG paradigms, near-infrared spectroscopy will be used for signal acquisition due to its higher spatial resolution. Automated software will identify the best signal for each user and optimize signal translation. Prior to providing such patients with ICT, an unequivocal diagnosis is of utmost importance to define the most appropriate rehabilitation strategy and the most suitable supportive technology for interaction.

A hierarchical diagnostic approach starting with simple presentation of stimuli to intentional control of BCI will be developed, validated and disseminated. By implementing existing well-established and currently developed tools at all levels of the BCI and bringing together a multidisciplinary team we can ensure the achievement of the goals of DECODER.


Future Directions in Brain/Neuronal Computer Interaction (BNCI) Research

From 2010-01-01 to 2011-12-31


Brain Computer Interface (BCI) systems allow communication through direct measures of brain activity. Users can spell, move cursors, browse the internet, and control robotic devices such as prosthetics or wheelchairs with thought alone. BNCI systems are similar, but can also rely on indirect measures of brain activity. Rapid progress in BCI and BNCI research is creating a number of new opportunities across a much wider range of potential users than previously recognized.

Unfortunately, the many new developments and new research groups lead to two problems. First, key terms and definitions are confusing, outdated, nonexistent, and/or only sporadically accepted by people and groups from different backgrounds. There is no clear agreement on fundamental terms like BCI and BNCI. This confusion impedes the development of standards and roadmaps for both academic and commercial efforts. It is also difficult to organize conferences, workshops, special journal editions, expositions, or electronic collaboration resources for a field that is unknown and poorly defined. Second, there are widely differing views on how to capitalize on recent progress and on which avenues for future development merit the most attention.

These problems will only get worse without an effective coordination effort. It seems unlikely that any project can align constituencies and prepare future joint research and roadmaps when the relevant disciplines and stakeholders are not even clearly identified. Future Directions in BNCI Systems (Future BNCI) will identify which opportunities are (and aren't) promising across all four components of a BCI system: sensors and signals; signal processing; applications and devices; and interfaces and operating environments. These four components will be addressed through three COORD WPs, and another COORD WP will cover Standards and Dissemination.

This project will establish and entrench key terms, definitions, and standards. Knowledge will be disseminated through a conference, workshops, special sessions, a book through a major publisher, other peer reviewed publications, and informal interactions with key stakeholders in BCI and related research. A website will promote both commercial and academic development with links, downloadable materials, and free information about BCI basics, research groups and people, conferences, news events, and publications.

These advancements will counteract the growing confusion, miscommunication, inefficiency, and stakeholder fragmentation within BCI research, and establish the foundations necessary to transform BCIs from their infancy to a mature, coordinated, mainstream, high impact research and development endeavour. The end result will be a coherent, efficient BCI community capable of making a strong impact on EU dominance and helping a greatly expanding number of potential users.


Gentle User Interfaces for Disabled and Elderly Citizens

From 2010-02-01 to 2013-01-31


GUIDE develops a toolbox of adaptive, multimodal user interfaces (UIs) that target the accessibility requirements of elderly users in their home environment, making use of TV set-top boxes as a processing and connectivity platform alongside the common PC platform. With its software, hardware and documented knowledge, this toolbox will enable developers of ICT applications to implement truly accessible applications more easily, using the most recent user interface technologies with reduced development effort. For this purpose, the toolbox provides the technology of advanced multimodal UI components as well as the adaptation mechanisms necessary to make UI components interoperable with legacy and novel applications, including the capability to self-adapt to user needs.

Following a user-centred approach, user studies investigate optimum combinations of UIs and their adaptation in selected ICT applications for all relevant individual accessibility requirements. The scope of disabilities covers the majority of the ageing population that suffers from mild visual, auditory, speech and motor impairments. Because the project targets support tools for application development rather than a specific application, a user model is developed that represents the results of the user studies in a generalized way as virtual user profiles.

This user model forms the core of a smart adaptation layer (SAL). The SAL can adapt an ensemble of UI components to a given ICT application and specific user needs at runtime, using the virtual user representation compiled from the user studies. By integrating an ICT application with the SAL, GUIDE UI components become reusable not only on a software level but also with regard to accessibility requirements. Moreover, a simulation engine included in the toolbox predicts both the perception and the interaction of virtual impaired users, raising ICT application developers' awareness of the accessibility problems of their target users.
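As a minimal sketch of what profile-driven adaptation means in practice, consider a function that derives rendering parameters from a virtual user profile. The profile keys, impairment labels and parameter values below are invented for illustration; the real SAL works with far richer models:

```python
def adapt_ui(profile):
    """Derive UI settings from a (hypothetical) virtual user profile.

    profile: dict mapping an ability domain ("vision", "hearing", "motor")
             to an impairment label; absent keys mean no impairment.
    """
    settings = {"font_pt": 12, "captions": False, "dwell_ms": 500}
    if profile.get("vision") == "mild_impairment":
        settings["font_pt"] = 18          # larger text for reduced acuity
    if profile.get("hearing") == "mild_impairment":
        settings["captions"] = True       # caption all audio output
    if profile.get("motor") == "mild_impairment":
        settings["dwell_ms"] = 1200       # longer pointer dwell for selection
    return settings
```

Because the application queries the adaptation layer rather than hard-coding such rules, the same application serves users with very different impairment combinations.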


Mind controlled orthosis and virtual reality training environment for walk empowering

From 2010-01-01 to 2012-12-31


A lack of mobility often leads to limited participation in social life. The purpose of this STREP is to conceive a system empowering people with lower-limb disabilities with walking abilities that let them perform their usual daily activities in the most autonomous and natural manner possible.

New smart dry EEG bio-sensors will be applied to enable lightweight wearable EEG caps for everyday use. Novel approaches to non-invasive BCI will be explored in order to control a purpose-designed lower-limb orthosis enabling different types of gait. Complementary research on EMG processing will strengthen the approach. A Virtual Reality (VR) training environment will assist the patients in generating the correct brain control signals and in properly using the orthosis.

The main BCI approach relies on Dynamic Recurrent Neural Network (DRNN) technology applied in a two-stage process. After learning, the system will be able to match EMG signals to leg movements (Stage 2), and EEG signals to such EMG signals (Stage 1). Stage 2 has already been successfully demonstrated by a project partner. The orthosis will be designed to support the weight of an adult, to address the dynamic stability of a combined body-exoskeleton system, and to enable different walking modalities. The VR training environment will comprise both a set of components for progressive patient training in a safe and controlled medical environment, and a lightweight portable set using immersive VR solutions for self-training at home.

The developed technologies will be assessed and validated with the support of a formal clinical validation procedure. This will make it possible to measure the strengths and weaknesses of the chosen approaches and to identify the improvements required to build a future commercial system. In addition, the resulting system will be progressively tested in everyday life environments and situations, ranging from simple activities at home to, eventually, shopping and interacting with people in the street.
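The two-stage decoding idea can be sketched by composition: Stage 1 maps EEG features to predicted EMG, and Stage 2 maps EMG to leg-motion commands. The project uses trained DRNNs for both stages; the trivial linear maps below are stand-ins purely to show how the stages chain, and the weights are meaningless placeholders:

```python
def stage1_eeg_to_emg(eeg_features, w1=0.8):
    """Placeholder for the trained Stage 1 DRNN: EEG -> predicted EMG."""
    return [w1 * x for x in eeg_features]

def stage2_emg_to_motion(emg, w2=1.5):
    """Placeholder for the trained Stage 2 DRNN: EMG -> leg-motion commands."""
    return [w2 * x for x in emg]

def decode_gait(eeg_features):
    """Chain the two learned stages: EEG -> EMG -> orthosis joint commands."""
    return stage2_emg_to_motion(stage1_eeg_to_emg(eeg_features))
```

One consequence of the staged design is visible even in this sketch: Stage 2 can be trained and validated on its own (as a project partner has already demonstrated), with EEG decoding added on top later.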


MUltimodal Neuroprosthesis for Daily Upper limb Support

From 2010-03-01 to 2013-02-28


MUNDUS is an assistive framework for recovering the direct interaction capability of severely motor-impaired people, based on arm reaching and hand function. Most of the solutions provided by Assistive Technology for supporting the independent life of severely impaired people completely substitute the natural interaction with the world, reducing their acceptance. Human dignity and self-esteem are better preserved when missing functions are restored with devices that safeguard self-perception and first-hand interaction while guaranteeing independent living.

MUNDUS uses any residual control of the end user, and is thus suitable for long-term utilization in daily activities. Sensors, actuators and control solutions adapt to the level of severity or to the progression of the disease, allowing the disabled person to interact voluntarily, naturally and at the maximum information rate.

MUNDUS targets neurodegenerative and genetic neuromuscular diseases and high-level Spinal Cord Injury.

MUNDUS is an adaptable and modular facilitator, which follows its user along the progression of the disease, sparing training time and allowing fast adjustment to new situations. The MUNDUS controller integrates multimodal information collected by electromyography, bioimpedance, head/eye tracking and, where needed, brain-computer interface commands. The MUNDUS actuators modularly combine a lightweight, non-cumbersome exoskeleton compensating for arm weight, a biomimetic wearable neuroprosthesis for arm motion, and small, lightweight mechanisms assisting the grasp of collaborative functional objects identified by radio-frequency identification.
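The idea of following the user along the progression of the disease can be sketched as a fallback chain over input channels: the controller prefers the most natural channel the user can still operate reliably, and falls back toward BCI as voluntary control is lost. The priority order, the reliability cutoff and the channel names below mirror the modalities listed above but are otherwise invented for illustration:

```python
# Preferred order of input channels, from most natural residual control
# down to the last-resort brain-computer interface.
PRIORITY = ["emg", "bioimpedance", "eye_tracking", "bci"]

def select_channel(residual_ability, cutoff=0.6):
    """Pick the first channel in PRIORITY that the user still controls reliably.

    residual_ability: dict mapping channel name -> measured reliability in [0, 1].
    cutoff: illustrative reliability threshold for a channel to remain usable.
    """
    for channel in PRIORITY:
        if residual_ability.get(channel, 0.0) >= cutoff:
            return channel
    return "bci"    # fallback when no voluntary channel remains reliable
```

Re-running such a selection as the disease progresses is what lets the system adjust quickly to new situations without retraining the user from scratch.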

Lightness and non-cumbersomeness will be crucial to applicability in the home/work environment.

Specific scenarios in the home and work environment will be used to assess, subjectively and quantitatively, the usability of the system by real end-users in the living laboratory facility.


MyUI: Mainstreaming Accessibility through Synergistic User Modelling and Adaptability

From 2010-02-01 to 2012-07-31


MyUI will foster the mainstreaming of accessible and highly individualized ICT products, a major issue for e-Inclusion. The project addresses important barriers, which include developers' lack of awareness and expertise, the time and cost of incorporating accessibility, and missing validated approaches and infrastructures. The project's approach goes beyond the notion of Universal Design by addressing specific user needs through adaptive personalized interfaces. An ontology-based context-management infrastructure will collect user and context information in real time during use. Sharing the collected information across several personal applications will increase efficiency and validity.

The user interface will self-adapt to the evolving individual user model in order to fit the user's special needs and preferences. The MyUI adaptation engine will rely on empirically based design patterns for specified user and context characteristics. Providing support for developers is a key goal of the project: accessibility guidelines, re-usable interface components, and a virtual environment for illustration, training and monitoring will support the mainstreaming of accessibility in ICT products. An interface adaptation engine and simulation tools will help developers test, assess and refine their designs. MyUI technologies will be implemented in three selected use cases to demonstrate their benefit and feasibility in industrial development contexts: an interactive TV device, an interactive digital physiotherapy service, and an interactive socialization service. End users and developers will be involved in all stages of the project.
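The mechanism of design patterns keyed to user characteristics can be sketched as a rule table: each pattern declares the condition under which it applies, and the engine collects all patterns matching the current user model. The pattern names, profile keys and thresholds here are invented for illustration, not MyUI's actual pattern catalogue:

```python
# Hypothetical pattern catalogue: each entry pairs a pattern name with the
# user-model condition under which the adaptation engine would apply it.
PATTERNS = [
    {"name": "high_contrast_theme",
     "when": lambda u: u.get("contrast_sensitivity", 1.0) < 0.5},
    {"name": "large_buttons",
     "when": lambda u: u.get("pointing_precision", 1.0) < 0.5},
    {"name": "default_theme",
     "when": lambda u: True},    # always applicable baseline
]

def select_patterns(user_model):
    """Return the names of all design patterns matching the user model."""
    return [p["name"] for p in PATTERNS if p["when"](user_model)]
```

Keeping the conditions with the patterns, rather than in application code, is what lets an evolving user model change the interface without any application redeployment.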

Empirical research will determine the design of user models, design patterns, and adaptation mechanisms. Iterative development and research cycles will ensure that the MyUI artefacts will increase accessibility and that they can be adopted efficiently by the industry. Field tests will validate the project approach in realistic environments.


Training young Adults' Regulation of emotions and Development of social Interaction Skills

From 2011-11-01 to 2014-10-31


The number of young people not in employment, education or training (NEET) is increasing across Europe. Current research reveals that NEETs often lack self-confidence and the essential social skills needed to seek and secure employment. Youth inclusion associations across Europe provide social coaching programmes in order to help young people acquire and improve their social competencies. However, this is an expensive and time-consuming approach that relies on the availability of trained practitioners as well as on the willingness of the young people to engage in exploring their social strengths and weaknesses in front of their peers and practitioners. Digital technologies such as serious games offer the advantage of a repeatable experience that can be modulated to suit the individual needs of young people. Additionally, such technologies are intrinsically motivating to the young and have the potential to remove the many barriers that real-life situations may pose, in particular the stress associated with engaging in unfamiliar interactions with others. TARDIS aims to build a scenario-based serious-game simulation platform for young people at risk of exclusion, aged 18-25, to explore, practise and improve their social skills.

TARDIS will facilitate the interaction through virtual agents (VAs) acting as recruiters in job interview scenarios. The VAs are designed to deliver realistic socio-emotional interactions and are credible, yet tireless, interlocutors. TARDIS exploits the unique affordances of digital technology by creating an environment in which the quality and quantity of emotional display by the agents can be modulated to scaffold the young trainees through a diverse range of possible interview situations. The scenarios are co-designed with experienced practitioners in several European countries in order to ensure their relevance to different individuals across a number of cultural contexts.

TARDIS offers three major innovations. First, it will be able to detect users' emotions and social attitudes in real time through voice and facial expression recognition, and to adapt the progress of the game and the virtual interlocutors' behaviour to the individual users. Second, it will provide field practitioners with an intuitive authoring tool for designing appropriate interview scenarios and for setting the agents' behaviours without the help of computer scientists. Third, it will give practitioners unique access to a systematic record of the specific difficulties that the users experience. This will offer practitioners new instruments to measure individuals' progress in emotion regulation and social skill acquisition, facilitating reflection on their own practice and enabling more flexible and personalised coaching for young people at risk of social exclusion.
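One way the detected affective state could modulate the recruiter agent can be sketched as a simple adaptation loop: ease the scenario when detected stress is high, and raise the challenge when the trainee is comfortable. The difficulty scale, the stress thresholds and the function itself are illustrative assumptions, not TARDIS's actual adaptation logic:

```python
def adjust_interview(difficulty, stress_level):
    """Toy scaffolding rule for a recruiter-agent scenario.

    difficulty:   current scenario difficulty on an assumed 1..5 scale.
    stress_level: [0, 1] estimate derived from voice and facial cues.
    Returns the difficulty to use for the next exchange.
    """
    if stress_level > 0.7 and difficulty > 1:
        return difficulty - 1    # soften questions, warmer agent tone
    if stress_level < 0.3 and difficulty < 5:
        return difficulty + 1    # probe with harder follow-up questions
    return difficulty            # comfortable middle ground: hold steady
```

Bounding the adjustment keeps the trainee in a productive zone: stressed users are not pushed further, and comfortable users are not left unchallenged.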


Virtual and Augmented Environments and Realistic User Interactions To achieve Embedded Accessibility DesignS

From 2010-01-01 to 2013-12-31


VERITAS aims to develop, validate and assess an open framework for built-in accessibility support at all stages of ICT and non-ICT product development, including specification, design, development and testing. The goal is to introduce simulation-based and VR testing at all stages of product design and development in the automotive, smart living spaces, workplace, infotainment and personal healthcare application areas, and thereby to ensure that future products and services are systematically designed for all people, including those with disabilities and functional limitations. Specifically, VERITAS will develop:
- An Open Simulation Platform (OSP) for testing at all development stages that will provide automatic simulation feedback and reporting for guideline/methodologies compliance and quality of service.
- Detailed virtual user physical, cognitive, behavioural and psychological models, as well as the corresponding simulation models, to support simulation and testing at all stages of product planning and development.
- Accessibility support tools for all stages of iterative planning and development (i.e. specification, design, development, testing, evaluation) and for the five new application areas.
- Virtual simulation environments for ICT and non-ICT products, offering tools for testing and verification mainly at the design stage but also during the development stages, when links to ICT technologies are implemented.
- A VR simulation environment for realistic and iterative testing, providing simultaneous multimodal (visual, aural, etc.) feedback to the designer/developer as well as the potential for immersive realistic simulation and virtual persona testing (i.e. the developer taking the role of the end user).
- A simulation environment that will support multimodal-interface virtual testing in realistic scenarios, offering the opportunity to fine-tune and adapt these technologies to the specific application.
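The core of simulation-based accessibility testing can be sketched as checking a candidate design against constraints carried by virtual user models, with the failures reported back to the designer. The virtual user names, the constraint keys and the thresholds below are toy assumptions, far simpler than VERITAS's physical, cognitive, behavioural and psychological models:

```python
# Hypothetical virtual user models, each carrying accessibility constraints.
VIRTUAL_USERS = {
    "low_vision":    {"min_font_pt": 16},
    "limited_reach": {"max_control_height_mm": 1200},
}

def check_design(design):
    """Test a design dict against every virtual user; return (user, issue) failures."""
    failures = []
    for user, limits in VIRTUAL_USERS.items():
        if design.get("font_pt", 0) < limits.get("min_font_pt", 0):
            failures.append((user, "font too small"))
        if design.get("control_height_mm", 0) > limits.get("max_control_height_mm", 10**9):
            failures.append((user, "control out of reach"))
    return failures
```

An empty failure list means the design passes for every modelled user, which is precisely the automatic simulation feedback the Open Simulation Platform is meant to provide at each development stage.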


Virtual User Concept for Supporting Inclusive Design of Consumer Products and User Interfaces

From 2010-01-01 to 2012-06-30


The needs of people with sensory or dexterity impairments are generally not well considered when designing user interfaces (UIs) for mainstream consumer products. The majority of existing interfaces and controls rarely fulfil the accessibility requirements of users with visual, hearing or dexterity impairments. It is also common for an individual to have multiple impairments rather than just one; this is particularly prevalent among older people. A combination of impairments creates a far greater problem when interacting with a product than any single one.

The audience for VICON will be older people who have age-related (mild to moderate) impairments (age-related hearing loss, macular degeneration, etc.) rather than those with profound impairments. This group of people does not want (or require) specialist assistive devices, but mainstream consumer products. However, they only benefit fully from consumer products when the UIs incorporate accessible multimodal interaction capabilities providing good usability. It is unrealistic for a mainstream manufacturer to have a detailed understanding of these issues and to design appropriately, given the complexities of singular and multiple age-related impairments. Their inclusivity knowledge therefore has to be supported by a third-party solution. VICON will conduct extensive user research to build an advanced Virtual User Model that reflects the requirements of this group when designing a product or UI.

The Virtual User Model will accompany the entire design process and support the client throughout, so that the needs of this audience are addressed at every stage: conceptualisation, product and UI specification, virtual testing, and prototype evaluation.