Recent Seminars



The Department has regular research seminars given by internal and prominent external speakers. They are open to all members of the University and other interested parties. The individual research groups also run seminars and group meetings. Details of these can be found on research-group web pages.


TITLE

Understanding Complex Co-evolutionary Systems

SPEAKER: Dr Siang Yew Chong
PROFILE: Dr Siang Yew Chong (PhD in Computer Science, University of Birmingham, UK) is an Associate Professor at the University of Nottingham, Malaysia Campus, and a Marie Skłodowska-Curie Fellow at the University of Birmingham, UK. He received the IEEE Computational Intelligence Society Outstanding PhD Dissertation Award in 2009 and the IEEE Transactions on Evolutionary Computation Outstanding Paper Award in 2011. His research interests include evolutionary computation, evolutionary game theory, and, more recently, theoretical analysis of performance and dynamics in co-evolutionary systems.
ABSTRACT: For a computational problem-solving system to exhibit intelligence, it must be able to adapt its behavior to predict future outcomes and make appropriate decisions across a range of environments. Co-evolutionary computation has been introduced as an attractive alternative framework in which agents learn such behaviors through a natural process of selection and variation guided mostly by their interaction outcomes. However, broad fundamental issues remain in understanding the performance and search dynamics of complex co-evolutionary systems. This talk will introduce theoretical advances we have made on two such issues: (i) rigorous quantitative performance analysis of co-evolutionary computation, in the context of generalization from machine learning, and (ii) rigorous analysis of co-evolutionary dynamics, in the context of structural stability from dynamical systems. In the first part, performance statistics in co-evolutionary learning are formally defined and corresponding confidence bounds derived, allowing the construction of principled and efficient generalization estimators for applications ranging from analysis to design of co-evolutionary systems. In the second part, the shadowing property of co-evolutionary dynamics is defined and a condition for shadowing of population dynamics under co-evolution is formally established. This allows the effects of finite-precision computation and finite populations on numerically generated trajectories of co-evolutionary processes, under different selection mechanisms, to be studied rigorously.
DATE: 2017-11-06
TIME: 14:10:00
PLACE: Lecture Theatre MP-0.15 Physical Sciences Building


TITLE

Brain Inspired Computing

SPEAKER: Dr Stephen Lynch, Manchester Metropolitan University
PROFILE: Stephen was born in Liverpool in 1964 and studied Pure Mathematics at UCW Aberystwyth, where he also studied for his PhD with Professor Noel Lloyd. He was one of the first mathematicians in the world to carry out research using symbolic maths packages, and he is now an expert user of Python, Maple, Mathematica and MATLAB. Stephen has authored several maths books published by Springer, and his MATLAB book is the tenth most downloaded Springer maths book in the world! Stephen is a Fellow of the Institute of Mathematics and Its Applications (FIMA) and a Senior Fellow of the Higher Education Academy (SFHEA). He is a Senior Lecturer at Manchester Metropolitan University and was an Associate Lecturer with the Open University (2008-2012). He was instrumental in establishing a Schools Liaison forum in the North West of England; in 2010 he volunteered as a STEM Ambassador, and in 2014 he became a Speaker for Schools. His research area is Dynamical Systems, and he is a world-class leader in the use of maths packages in teaching, learning, assessment, research and employability. Stephen is the co-inventor of binary oscillator computing, with patents covering the UK and the US.
ABSTRACT: The average human brain consists of about 100 billion neurons connected by around a thousand trillion synapses – it is the most powerful computer known and yet consumes only about 25 Watts of power. Two mathematicians from Manchester Metropolitan University have invented a way to perform conventional computing using brain dynamics. The invention has potential applications in two scientific fields. In Computing, it could lead to the building of the world's most powerful supercomputer, using Josephson junctions to replace transistors. In Biology, it could provide an assay (test circuit) for cell degradation to help with drug testing for neurological conditions such as Alzheimer's disease, Parkinson's disease and epilepsy.
DATE: 2017-10-09
TIME: 14:10:00
PLACE: Lecture Theatre A6 - Llandinam Building


TITLE

The lifetime of an object - an object’s perspective onto interactions

SPEAKER: Dr Dima Damen, University of Bristol
PROFILE: Dima Damen is a Lecturer (Assistant Professor) in Computer Vision at the University of Bristol. She received her PhD from the University of Leeds (2009). Dima's research interests are in the automatic understanding of object interactions, actions and activities using static and wearable visual (and depth) sensors. Dima co-chaired BMVC 2013, is an area chair for BMVC (2014-2017) and an associate editor of Pattern Recognition and IET Computer Vision. In 2016, Dima was selected as a Nokia Research collaborator. She currently supervises 7 PhD students and 2 postdoctoral researchers.
ABSTRACT

As opposed to the traditional notion of actions and activities in computer vision, where the motion (e.g. jumping) or the goal (e.g. cooking) is the focus, I will argue for an object-centred perspective on actions and activities, whether during daily routine or as part of an industrial workflow. I will present approaches for understanding ‘what’ objects one interacts with, ‘how’ these objects have been used, and ‘when’ interactions take place.

The talk will be divided into three parts. In the first part, I will present unsupervised approaches to the automatic discovery of task-relevant objects and their modes of interaction, as well as to automatically providing guidance on using novel objects through a real-time wearable setup. In the second part, I will introduce supervised approaches to two novel problems: action completion (when an action is attempted but not completed) and expertise determination (who is better in task performance, and who is best). In the final part, I will discuss work in progress on uncovering labelling ambiguities in object interaction recognition, including ambiguities in defining the temporal boundaries of object interactions and ambiguities in verb semantics.

DATE: 2017-07-10
TIME: 14:10:00
PLACE: Hugh Owen - Lecture Theatre D5


TITLE

Computational Biology in Potato Pathology: From Chips to Chips

SPEAKER: Dr Leighton Pritchard, The James Hutton Institute
PROFILE: I graduated in 1996 from the University of Strathclyde with a first degree in Forensic and Analytical Chemistry, and remained there to complete my PhD in bioinformatics with Mark Dufton, on snake venom toxin sequence-structure-function relationships (which spun out a drug design algorithm that is still in use). From there I moved to Aberystwyth in 1999 to work with Doug Kell, modelling yeast glycolysis and directed evolution. In 2003 I left to take up a bioinformatics position at the Scottish Crop Research Institute, working on microbial plant pathogens and major genomics projects for the first enterobacterial plant pathogen to be sequenced, and the globally significant pathogen Phytophthora infestans. At SCRI and the James Hutton Institute (formed from a merger of SCRI and the Macaulay Land Use Research Institute), my computational biology research has covered bacterial, oomycete and nematode plant pathogens, and potato genomics. I currently have active projects focusing on plant-pathogen interactions, and on the persistence and spread of human and animal pathogens in agricultural and natural environments. I co-supervise PhD students at the Universities of Dundee (Phytophthora diagnostics and monitoring), St Andrews/IBioIC (synthetic and structural biology of virulence enzymes, for industrial biotechnology) and Galway (environmental E. coli), and am co-investigator on a major national collaboration with Forest Research, the Centres for Ecology and Hydrology, and the Universities of Edinburgh and Worcester to identify and evaluate threats to trees from Phytophthora spp. and make recommendations on nursery practice. My work for the Scottish Government focuses on environmental phylogenomics and diagnostics, with input to policy. I am a badged Software and Data Carpentry instructor, and have taught at the Universities of Dundee and Strathclyde, and at EMBL-EBI.
ABSTRACT: If it weren’t for the destruction of crops by plant pathogens, we could feed two billion extra mouths each year. In this presentation, I’ll describe how, at the James Hutton Institute, we are using computational biology to try to make a dent in the societal impact of potato diseases: building classifiers to identify the components of plants and their pathogens that control whether disease develops; using brute-force computational methods to mine public genome databases and develop accurate diagnostic tools for identifying pathogens; developing algorithms to improve genome assembly through difficult-to-resolve regions and get extra value from large public sequence databases; and using graph theory to redefine, and perhaps even overturn, long-standing taxonomic classifications of potato pathogens and other bacteria.
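The graph-theoretic idea mentioned in the abstract can be illustrated with a minimal sketch (the genome names, identity values and 0.95 threshold below are illustrative assumptions; a real analysis would compute a measure such as average nucleotide identity from genome sequences): genomes become nodes, edges join pairs above an identity threshold, and connected components suggest taxonomic clusters.

```python
# Minimal sketch: cluster genomes by pairwise identity using connected
# components of a threshold graph. Identity values are illustrative only.
from collections import defaultdict

def cluster_by_identity(pairs, threshold=0.95):
    """pairs: dict mapping (genome_a, genome_b) -> identity in [0, 1].
    Returns a list of clusters (sets of genome names)."""
    graph = defaultdict(set)
    for (a, b), identity in pairs.items():
        graph[a].add(a)  # ensure every genome appears as a node
        graph[b].add(b)
        if identity >= threshold:
            graph[a].add(b)
            graph[b].add(a)
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:  # depth-first traversal of one component
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            stack.extend(graph[current] - component)
        seen |= component
        clusters.append(component)
    return clusters

identities = {("P1", "P2"): 0.98, ("P2", "P3"): 0.97, ("P1", "P3"): 0.96,
              ("P1", "Q1"): 0.80, ("Q1", "Q2"): 0.99}
print(cluster_by_identity(identities))  # two clusters: {P1,P2,P3} and {Q1,Q2}
```

Note the low-identity pair (P1, Q1) contributes nodes but no edge, so the two groups stay separate.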
DATE: 2017-06-19
TIME: 14:10:00
PLACE: Hugh Owen - Lecture Theatre D5


TITLE

From Random Projections to Learning Theory and Back

SPEAKER: Dr Ata Kaban, University of Birmingham
PROFILE: Ata Kaban is a senior lecturer in Computer Science at the University of Birmingham, UK, and an EPSRC Early Career Fellow. Her research interests include statistical machine learning and data mining in high dimensional data spaces, algorithmic learning theory, probabilistic modelling of data, and black-box optimisation. She has authored or co-authored 80 peer-reviewed papers, including best paper awards at GECCO'13, ACML'13 and ICPR'10, and a runner-up at CEC'15. She was the recipient of an MRC Discipline Hopping award in 2008/09. She holds a PhD in Computer Science (2001) and a PhD in Musicology (1999). She is a member of the IEEE CIS Technical Committee on Data Mining and Big Data Analytics, and vice-chair of the IEEE CIS Task Force on High Dimensional Data Mining.
ABSTRACT: We consider two problems in statistical machine learning, one old and one new:
  • Given a machine learning task, what kinds of data distributions make it easier or harder? For instance, it is known that large margin makes classification tasks easier.
  • Given a high dimensional learning task, when can we solve it from a few random projections of the data with good-enough approximation? This is the compressed learning problem.
This talk will present results and work in progress that highlight parallels between these two problems. The implication is that random projection, a simple and effective dimensionality reduction method with origins in theoretical computer science, is not just a timely subject for efficient learning from large high dimensional data sets, but can also help us make a previously elusive fundamental problem more approachable. On the flip side, the parallel allows us to broaden the guarantees that hold for compressed learning beyond those initially inherited from compressed sensing.
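The random projection idea behind compressed learning can be illustrated in a few lines: multiplying the data by an appropriately scaled random matrix approximately preserves pairwise distances (the Johnson-Lindenstrauss property). The dimensions below are illustrative, not taken from the talk.

```python
# Sketch: a random Gaussian projection approximately preserves pairwise
# distances (Johnson-Lindenstrauss), which underlies compressed learning.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 10_000, 500          # n points, ambient dim d, projected dim k
X = rng.standard_normal((n, d))

R = rng.standard_normal((d, k)) / np.sqrt(k)   # scaled random projection matrix
Y = X @ R                                      # compressed data, n x k

# Compare one pairwise distance before and after projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(f"distance ratio after projection: {proj / orig:.3f}")  # close to 1
```

With k = 500 the relative distortion concentrates around a few percent; shrinking k trades accuracy for compression.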
DATE: 2017-06-05
TIME: 14:10:00
PLACE: Hugh Owen - Lecture Theatre D5


TITLE

Interval Type-2 Fuzzy Decision Making

SPEAKER: Bob John, ASAP, LUCID, School of Computer Science, University of Nottingham
PROFILE

Prof. Bob John joined the University of Nottingham in 2013 as Head of the Automated Scheduling, Optimisation and Planning research group on the LANCS initiative in the School of Computer Science. Bob is a member of the newly formed Lab for Uncertainty in Data and Decision Making. He is a member of the EPSRC Peer Review College and a Senior Member of the IEEE. A leading researcher in type-2 fuzzy logic, he was Co-General Chair of the 2007 FUZZ-IEEE International Conference. He is a member of various editorial boards and an associate editor of Soft Computing. He won the Outstanding IEEE Transactions on Fuzzy Systems Paper Award in 2010 (for a paper published in 2007). With over 6000 Google Scholar citations and an h-index of 36, Bob has an international research profile in the field of theoretical and practical fuzzy logic. He currently holds grants from the EU and Innovate UK.

ABSTRACT: This talk will start with an introduction to type-2 fuzzy sets and some of the pros and cons of their deployment. The body of the talk will present some new ideas on using interval type-2 fuzzy sets to inform decision making. In particular, Bob will discuss how interval type-2 fuzzy sets can be used so that the risk the decision maker is prepared to take is reflected in the decision-making process. This research is new, and he will present two simple examples to show how the approach works.
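As a toy illustration (my own sketch, not a construction from the talk): an interval type-2 fuzzy set can be represented by a pair of type-1 membership functions, so the membership of a crisp input is an interval rather than a single number. The triangular shapes and the 0.6 scaling below are illustrative assumptions.

```python
# Sketch: an interval type-2 fuzzy set represented by a lower and an upper
# (type-1) triangular membership function. Membership of a crisp input is
# an interval [lower(x), upper(x)] -- the footprint of uncertainty.

def triangular(x, a, b, c):
    """Standard triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x):
    upper = triangular(x, 0.0, 5.0, 10.0)        # upper membership function
    lower = 0.6 * triangular(x, 1.0, 5.0, 9.0)   # lower: scaled and narrower
    return lower, upper

lo, hi = it2_membership(4.0)
print(f"membership of 4.0 lies in [{lo:.2f}, {hi:.2f}]")  # [0.45, 0.80]
```

The width of the interval encodes the uncertainty that a type-1 set would collapse to a single membership grade.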
DATE: 2017-05-08
TIME: 14:10:00
PLACE: Hugh Owen - Lecture Theatre D5


TITLE

On Computable Numbers with an Application to the Alan Turing Problem

SPEAKER: Dr Catrin Huws, Aberystwyth Law School
PROFILE: Dr Catrin Fflur Huws is a Senior Lecturer in Law at Aberystwyth University. The unifying theme of her diverse research into law and art, law and theatre, law and literature, law and computing, and law and linguistics is how words are constructed and interpreted. Accordingly, she has published a number of papers on the interpretation of bilingual legislation, and on the interpretation of legislation that impliedly references other legislation. Her work on bilingual law-making was included as part of the Law Commission’s recommendations to the UK Government on the Form and Accessibility of the Law Applicable in Wales, and her recommendations to the Welsh Government led to the redrafting of the Welsh Language Tribunal Regulations. Her play ‘To Kill A Machine’, about the life and work of Alan Turing, toured the UK in 2015 and 2016, was nominated for six awards, and was a finalist for the Arch and Bruce Birch Award.
ABSTRACT: This paper explores the extent to which the law is computable in Alan Turing’s conception of computability. It argues that, although the law aspires to computability via its internal processes, Alan Turing’s own experiences of the law demonstrate that the influences on the law’s decision-making processes are not finite in number, and that the law is not necessarily calculated with reference to factors intrinsic to the law itself. Despite claims regarding the possibility of robot lawyers, therefore, this paper challenges that notion, and explains how Alan Turing’s arrest and trial occurred within a legislative framework that remained constant, but within a social, political and economic context that could not have been predicted.
DATE: 2017-03-20
TIME: 14:10:00
PLACE: Hugh Owen - Lecture Theatre D5


TITLE

Are you worth your weight in citations?
An Assessment of Scientific Impact Metrics and Proposed Improvements

SPEAKER: James Ravenscroft, Filament (a cognitive solutions agency) / University of Warwick
PROFILE: James is an Aberystwyth University Computer Science alumnus who graduated with a First Class Honours BSc in Artificial Intelligence and Robotics in 2013. As a graduate, James worked at IBM, where he was eventually promoted to UK Watson Solutions Architect. In June 2016, James left IBM to co-found a machine learning consultancy company called Filament, where he is the CTO. Since September 2015, James has been working part-time on a PhD in Natural Language Processing, jointly supervised by Maria Liakata at the University of Warwick and Amanda Clare at Aberystwyth University.
ABSTRACT

How do scientists and their research affect the world around us? Scientists in academia are most frequently measured in terms of metrics such as the h-index, i-index, citation count and, more recently, altmetric scores. However, these metrics are better suited to measuring the spread of knowledge amongst scientists than the impact of research on the wider world.
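One of the academic metrics mentioned above is simple to compute: the h-index is the largest h such that the author has at least h papers with h or more citations each. A short sketch, with illustrative citation counts:

```python
# The h-index: the largest h such that at least h papers have h or more
# citations each. Citation counts below are illustrative.

def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:   # the rank-th best paper still has >= rank citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with at least 4 citations
print(h_index([25, 8, 5, 3, 3]))   # 3: one huge paper does not raise the index
```

The second example shows why the h-index measures breadth of uptake rather than the reach of any single result.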

In this seminar, James will explain his research into metrics for scientific impact beyond academia, in areas such as the economy, society, healthcare and legislation, coined "comprehensive impact". He will discuss the current state of academic and comprehensive impact metrics, his recent computational study into how Research Excellence Framework (REF) impact scores and academic impact metrics interact, and the implications of these relationships for future REF assessments. He will also suggest ways in which new comprehensive impact metrics could be developed using information retrieval and textual enrichment techniques.

DATE: 2017-03-06
TIME: 14:10:00
PLACE: Hugh Owen - Lecture Theatre D5


TITLE

Differential Evolution and Applied Optimization Domains

SPEAKER: Professor Aleš Zamuda, University of Maribor (UM), Slovenia
PROFILE: Asst. Prof. Dr. Aleš Zamuda received his B.Sc. (2006), M.Sc. (2008) and Ph.D. (2012) degrees in computer science from the University of Maribor, where he is affiliated with the Faculty of Electrical Engineering and Computer Science. His journal publications cover differential evolution, evolutionary computer vision, evolutionary robotics, energy, and artificial life. He is an IEEE member, a reviewer for some thirty scientific journals and fifty conferences, and an associate editor of Swarm and Evolutionary Computation.
ABSTRACT: The Differential Evolution (DE) algorithm will be presented as a computational technique for numerical optimization that stochastically evolves a population of vectors to fit a vector function. Recent versions of the algorithm and studies of it will then be surveyed. Some domains applying DE will then be discussed, such as trajectory design and deep-sea exploration with autonomous robotics, spatial evolutionary computer vision and tree morphologies, and short-term scheduling of power plants. Finally, further real-world industry challenges and future prospects will be covered, such as socio-technical systems connecting robotics and swarm algorithms.
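As background for the talk, a minimal sketch of the classic DE/rand/1/bin scheme minimizing the sphere function (population size, F, CR and generation count are illustrative defaults, not values from the talk):

```python
# Minimal sketch of DE/rand/1/bin minimizing the sphere function sum(x^2).
import numpy as np

def differential_evolution(f, dim=5, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct population members different from i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])      # differential mutation
            cross = rng.random(dim) < CR                 # binomial crossover
            cross[rng.integers(dim)] = True              # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            if (ft := f(trial)) <= fitness[i]:           # greedy selection
                pop[i], fitness[i] = trial, ft
    return pop[fitness.argmin()], fitness.min()

best, value = differential_evolution(lambda x: np.sum(x**2))
print(f"best objective found: {value:.2e}")
```

The greedy one-to-one replacement is what gives DE its simple, elitist convergence behaviour on smooth landscapes like this one.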
DATE: 2017-02-20
TIME: 14:10:00
PLACE: Hugh Owen - Lecture Theatre D5


TITLE

Learning from Temporal Data Through Learning in the Space of Dynamical Systems

SPEAKER: Professor Peter Tino, University of Birmingham
PROFILE

Professor Peter Tino (M.Sc. Slovak University of Technology, Ph.D. Slovak Academy of Sciences) was a Fulbright Fellow with the NEC Research Institute, Princeton, NJ, USA, and a Post-Doctoral Fellow with the Austrian Research Institute for AI, Vienna, Austria, and with Aston University, Birmingham, U.K. Since 2003, he has been with the School of Computer Science, University of Birmingham, U.K., where he is currently a full Professor - Chair in Complex and Adaptive Systems. Peter was a recipient of the Fulbright Fellowship in 1994, the UK–Hong-Kong Fellowship for Excellence in 2008, three Outstanding Paper of the Year Awards from the IEEE Transactions on Neural Networks in 1998 and 2011 and the IEEE Transactions on Evolutionary Computation in 2010, and the Best Paper Award at ICANN 2002 and IDEAL 2016. He serves as associate editor of IEEE TNNLS, IEEE TCYB, Scientific Reports and Neural Computation. His current research interests include: theoretical underpinning of existing machine learning methodologies; inter-disciplinary applications of machine learning; learning in the model space; modelling of brain imaging data across multiple spatial and temporal scales; learning with privileged information; analysis of population-level complex dynamics; and adaptive state space models.

ABSTRACT

In learning from "static" data, where the order of presentation carries no useful information, one common framework is to transform the input items non-linearly into a (usually high-dimensional) feature space that is "rich" enough for linear techniques to suffice. However, data such as EEG signals or biological sequences naturally come with a sequential structure. I will present a general dynamical state space model that effectively acts as a dynamical feature space for representing temporally ordered samples. I will then outline a framework for learning on sets of sequential data by building kernels based on such dynamical filters. The methodology will be demonstrated in a series of sequence classification tasks and in an incremental temporal "regime" detection task.
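A minimal sketch in the spirit of this idea (my own toy construction, not the speaker's actual method; reservoir size, scalings and test signals are illustrative): each sequence is represented by the readout weights of a fixed random dynamical system fitted to predict the sequence's next value, and sequences are then compared by distances between those readouts, i.e. in the space of models rather than the space of raw signals.

```python
# Sketch of "learning in the model space": a fixed random recurrent system
# acts as a dynamical feature space; each sequence is summarised by the
# ridge-regression readout fitted to predict its next value.
import numpy as np

rng = np.random.default_rng(1)
N = 20                                            # reservoir size
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale for stable dynamics
w_in = rng.standard_normal(N)

def model_space_features(seq):
    """Fit a ridge readout predicting seq[t+1] from the reservoir state at t."""
    x = np.zeros(N)
    states = []
    for s in seq[:-1]:
        x = np.tanh(W @ x + w_in * s)             # fixed dynamical filter
        states.append(x)
    S = np.array(states)
    targets = np.asarray(seq[1:])
    lam = 1e-2                                    # ridge regularisation
    return np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ targets)

t = np.linspace(0, 8 * np.pi, 200)
w_sine1 = model_space_features(np.sin(t))
w_sine2 = model_space_features(np.sin(t + 0.5))   # same dynamics, shifted phase
w_noise = model_space_features(rng.standard_normal(200))

# Similar dynamics -> nearby models; different dynamics -> distant models.
print(np.linalg.norm(w_sine1 - w_sine2) < np.linalg.norm(w_sine1 - w_noise))
```

The two sines share the same underlying next-step map, so their fitted readouts land close together even though the raw signals are out of phase.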

DATE: 2016-12-05
TIME: 14:10:00
PLACE: Hugh Owen Lecture Theatre A14


TITLE

Sketched Visual Narratives for Image and Video Search

SPEAKER: Dr John Collomosse, University of Surrey
PROFILE

Dr John Collomosse is a Senior Lecturer in the Centre for Vision Speech and Signal Processing (CVSSP) at the University of Surrey. John joined CVSSP in 2009, following four years lecturing at the University of Bath, where he also completed his PhD in Computer Vision and Graphics (2004). John has spent periods of time at IBM UK Labs, Vodafone R&D Munich, and HP Labs Bristol.

John's research is cross-disciplinary, spanning Computer Vision, Computer Graphics and Artificial Intelligence, focusing on ways to add value and make sense of large, unstructured media collections - to visually search media collections and present them in aesthetic and comprehensible ways. Recent projects spanning Vision and Graphics include: sketch-based search of images/video; plagiarism detection in the arts; visual search of dance; structuring and presenting large visual media collections using artistic rendering; and developing character animation from 3D multi-view capture data. John holds ~70 refereed publications, including oral presentations at ICCV and BMVC, and journal papers in IJCV, IEEE TVCG and TMM. He was general chair for NPAR 2010-11 (at SIGGRAPH), BMVC 2012, and CVMP 2014-15, and is an associate editor for C&G and Eurographics CGF.

ABSTRACT

The internet is transforming into a visual medium; over 80% of the internet is forecast to be visual content by 2018, and most of this content will be consumed on mobile devices featuring a touch-screen as their primary interface. Gestural interaction, such as sketch, presents an intuitive way to interact with these devices. Imagine a Google image search in which you specify your query by sketching the desired image with your finger, rather than (or in addition to) describing it with text. Sketch offers an orthogonal perspective on visual search - enabling concise specification of appearance (via sketch) in addition to semantics (via text).

In this talk I will present a summary of my group's work on the use of free-hand sketches for the visual search and manipulation of images and video. I will begin by describing a scalable system for sketch-based search of multi-million image databases, based upon our Gradient Field HOG (GF-HOG) descriptor. I will then describe how deep learning can be used to enhance retrieval performance. Imagine a product catalogue in which you sketch, say, an engineering part, rather than using text or serial numbers to find it. I will then describe how scalable search of video can be similarly achieved, through sketched visual narratives that depict not only objects but also their motion (dynamics) as a constraint to find relevant video clips.

The work presented in this talk has been supported by the EPSRC and AHRC between 2012-2015.

DATE: 2016-11-28
TIME: 14:10:00
PLACE: Hugh Owen Lecture Theatre A14


TITLE

Advanced Techniques for Feature Extraction in Hyperspectral Imaging

SPEAKER: Dr Jinchang Ren, University of Strathclyde
PROFILE

Jinchang Ren is a Senior Lecturer in the Department of Electronic and Electrical Engineering, University of Strathclyde. He received his PhD in Electronic Imaging and Media Communication from the University of Bradford, United Kingdom, in 2009. Before that, he obtained a D.Eng. in Computer Vision, an M.Eng. in Image Processing and Pattern Recognition and a B.Eng. in Computer Software, all from Northwestern Polytechnical University (NWPU), China, in 2000, 1997 and 1992, respectively.

Dr. Ren has published over 150 peer-reviewed research papers in prestigious international journals and conferences. His research interests include image processing and analysis; intelligent multimedia information processing; visual computing; computer vision; content-based image/video management, retrieval and understanding; pattern recognition; human-computer interaction; visual surveillance; archive restoration; motion estimation; and hyperspectral imaging.

ABSTRACT: Although hyperspectral imaging has been widely applied in areas such as remote sensing, precision agriculture, mining and surveillance, food and drink inspection, pharmaceuticals, materials, and security, one fundamental problem that has severely constrained its applicability is feature extraction from the hypercube. In this talk, several key techniques for feature extraction in hyperspectral imaging are reported, including structured PCA, folded-PCA and singular spectrum analysis. Experimental results on several remote sensing datasets are presented to show the efficacy of these techniques.
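The basic recipe these techniques build on can be sketched with plain PCA on a synthetic hypercube (the cube, endmembers and noise level below are fabricated for illustration; structured PCA and folded-PCA refine this baseline):

```python
# Sketch: spectral feature extraction from a hyperspectral cube via PCA.
# The cube is synthetic: two spectral endmembers mixed per pixel plus noise.
import numpy as np

rng = np.random.default_rng(0)
rows, cols, bands = 32, 32, 100
endmembers = rng.random((2, bands))                 # two reference spectra
abundance = rng.random((rows * cols, 2))            # per-pixel mixing weights
cube = abundance @ endmembers
cube += 0.01 * rng.standard_normal(cube.shape)      # sensor noise
cube = cube.reshape(rows, cols, bands)

# PCA: flatten spatial dims, centre, project onto the top-k eigenvectors.
X = cube.reshape(-1, bands)
X = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
features = X @ Vt[:k].T                             # (pixels, k) features
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"top-{k} components explain {explained:.1%} of variance")
```

Because only two endmembers generate the data, a handful of components captures nearly all the spectral variance, which is exactly the redundancy that PCA-style feature extraction exploits in real hypercubes.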
DATE: 2016-10-17
TIME: 14:10:00
PLACE: TBC


TITLE

Social-Aware D2D Communication Underlaying Cellular Network: Where Mobile Network Meets Social Network

SPEAKER: Professor Sheng Chen, University of Southampton
PROFILE: Sheng Chen is Professor in Intelligent Systems and Signal Processing at Electronics and Computer Science, the University of Southampton. He is a Fellow of the United Kingdom Royal Academy of Engineering, a Fellow of the IEEE and a Fellow of the IET. He is an ISI highly cited researcher in engineering (March 2004).

Professor Chen has wide research interests, including adaptive signal processing, wireless communications and networks, modelling and identification of nonlinear systems, neural network and machine learning, intelligent system design, evolutionary computation methods and optimisation.

ABSTRACT: The birth of social networks is, to a large extent, owed to mobile networks, and an ever-increasing volume of mobile traffic is generated from social networks. A key component of LTE-A is device-to-device (D2D) communication to support mobile content downloading, and next-generation mobile networks will be D2D underlaying cellular networks. In this talk, we demand a "payback" from social networks, and examine how to exploit their structure and characteristics to design better future generations of D2D underlaying mobile networks.
DATE: 2016-10-10
TIME: 14:10:00
PLACE: Hugh Owen Lecture Theatre A14


TITLE

Complexity of the n-Queens Completion Problem

SPEAKER: Professor Ian Gent
PROFILE: Ian Gent is Professor of Computer Science at the University of St Andrews, with research interests in Artificial Intelligence. He recently realised that he started studying Artificial Intelligence halfway through the history of the field, since it was founded in 1956 and he started his M.Sc. in the area in 1986. His research has usually been on how to search for solutions to combinatorial problems.
ABSTRACT

The n-Queens problem is to place n chess queens on an n by n chessboard so that no two queens are on the same row, column or diagonal in either direction. This is one of the most famous puzzles there is, and is often - incorrectly - attributed to Gauss. It has very often been used as a benchmark for combinatorial search methods, and also very often criticised as a bad test case [e.g. see *]. The reason for the criticism is that a solution can be computed in time O(n) for any n > 3. We show that this criticism does not apply to the completion variant of the problem. That is, given m queens which do not attack each other on an n by n chessboard, can we add n-m queens to get a solution of the n-Queens problem? We show that this problem is NP-Complete and #P-Complete. We also report how difficult the n-Queens completion problem is on random instances, and thereby seek to rescue the n-Queens problem - in its completion version - as a valid benchmark.

* see: CheatingOnTheNQueensBenchmark
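To make the completion variant concrete, here is a small backtracking sketch (illustrative only: the talk concerns the problem's complexity, not this naive solver, and the solver assumes the pre-placed queens are already mutually non-attacking):

```python
# Sketch of n-Queens completion: given pre-placed queens, try to extend to
# a full non-attacking placement by backtracking over the empty rows.
# board[i] = column of the queen in row i, or None if row i is empty.

def attacks(r1, c1, r2, c2):
    return c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def complete(board, row=0):
    n = len(board)
    if row == n:
        return board
    if board[row] is not None:                 # this row was pre-placed
        return complete(board, row + 1)
    for col in range(n):
        if all(board[r] is None or not attacks(row, col, r, board[r])
               for r in range(n)):
            board[row] = col
            if complete(board, row + 1):
                return board
            board[row] = None                  # backtrack
    return None

# 5x5 board with queens pre-placed at (row 0, col 0) and (row 2, col 1).
print(complete([0, None, 1, None, None]))      # -> [0, 3, 1, 4, 2]
```

The hardness result says that, unlike the O(n) construction for the unconstrained puzzle, no trick is known to avoid this kind of search in general once queens are pre-placed.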

DATE: 2016-07-04
TIME: 15:10:00
PLACE: Hugh Owen - Lecture Theatre D5


TITLE

Swarm Robotics as a Tool to Study Collective Behaviors in Biological Systems

SPEAKER: Professor Eliseo Ferrante
PROFILE: Dr. Eliseo Ferrante holds a Research Chair position at the Heudiasyc CNRS Laboratory of the Université de Technologie Compiègne (UTC - France). He holds a Ph.D. in Applied Sciences from the Université Libre de Bruxelles (ULB, 2013), a master's and a bachelor's degree in Computer Science Engineering from Politecnico di Milano (Italy), and a Master of Science in Computer Science from the University of Illinois at Chicago (USA). Dr. Ferrante's research focuses on swarm robotics, studied from an interdisciplinary perspective that combines computational, statistical-physics, and evolutionary models of collective behaviors. Some of the phenomena he studies include collective motion, task specialization, and collective decision-making in animals, artificial agents and robots.
ABSTRACT

Swarm robotics studies the design of collective behaviors for swarms of robots. From the engineering perspective, it relies on collaboration and self-organization to solve problems in large unstructured or unpredictable environments. However, it has also proven useful for studying scientific questions about collective behaviors in animals and their evolution. In this talk, I will describe two projects of mine in which robot swarms have been used to model the coordinated motion seen in birds and fish, and the evolution of self-organized task specialization in ants.

In the first project, we developed a method that allows a swarm of robots to move in a common direction without global information and without exchanging directional information. The method was then analyzed from the statistical physics perspective, and resulted in a model of self-organized collective motion that we call AES (Active Elastic Sheet). AES is based only on attraction-repulsion interactions, as opposed to the alignment-only interactions that characterize the standard statistical physics model of collective motion (the Vicsek model). In follow-up work we also showed that, in contrast with the Vicsek model, AES is able to reproduce the same type of scale-free correlations observed in natural starling flocks.
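A heavily simplified, one-dimensional sketch of the attraction-repulsion ingredient (illustrative parameters of my own; the full AES model couples such elastic forces to self-propelled heading dynamics in the plane): agents feel linear spring forces toward a preferred neighbour spacing and relax toward a uniform formation with no alignment term at all.

```python
# Position-only sketch of attraction-repulsion interactions: agents on a
# line are linked to their neighbours by linear springs with rest length 1
# and relax toward uniform spacing (overdamped dynamics, no alignment rule).
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 5, 8))          # 8 agents, random 1-D positions
rest = 1.0                                 # preferred neighbour distance

for _ in range(3000):
    gaps = np.diff(x)
    f = gaps - rest                        # positive -> spring pulls together
    force = np.zeros_like(x)
    force[:-1] += f                        # each spring acts on both ends
    force[1:] -= f
    x += 0.1 * force                       # overdamped position update

print(np.round(np.diff(x), 3))             # gaps converge to the rest length
```

Even this stripped-down version shows the key point of AES-style models: order (here, uniform spacing) emerges from attraction-repulsion alone, without any agent copying its neighbours' directions.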

In the second project, I will present a study on the evolution of task specialization and task partitioning. In the study, we evolved for the first time the task allocation mechanism as well as the individual behavior needed to carry out the individual sub-tasks. I will show the implications of my studies on both engineering and biology.

I will conclude by quickly presenting some ongoing projects and future research plans, for which collaborations are sought!

DATE: 2016-06-20
TIME: 14:10:00
PLACE: MP-Physics B


TITLE

NLP Techniques for Processing Scientific Articles and Sentiment in Social Media

SPEAKER: Professor Maria Liakata, University of Warwick
PROFILE

Maria has a DPhil from the University of Oxford on learning pragmatic knowledge from text and her research interests include text mining, natural language processing (NLP), related social and biomedical applications, analysis of multi-modal and heterogeneous data (text from various sources such as social media, sensor data, images) and biological text mining. Her work has contributed to advances in knowledge discovery from corpora, automation of scientific experimentation and automatic extraction of information from the scientific literature. She has published widely both in NLP and interdisciplinary venues.

Maria is researching NLP for social science. She holds an IBM Faculty Award for studying “Emotion sensing using heterogeneous mobile phone data” and is a co-investigator on the EU Project PHEME, which studies the spread of rumours in social media. She is also a co-I on an IBM Faculty award for developing a course on Big Data ethics. Maria is leading a project funded by the Warwick Innovation Fund to diagnose and monitor dementia using text analysis.

ABSTRACTI will present an overview of my recent work, which uses NLP and machine learning to analyse scientific papers. I use automatically generated scientific discourse annotations (such as Hypothesis, Results, Conclusion and Method) to create summaries of the articles, provide more efficient search, and automatically characterise article type (e.g. review article, research paper, etc.). I will also discuss our work in social media analysis for identifying sentiment targeted at specific entities, such as politicians, or concepts, such as immigration.
DATE2016-06-06
TIME14:10:00
PLACEMP-Physics B


TITLE

Discriminative Spectral Imaging Research at Cranfield for Defence, Security and Manufacturing Sectors

SPEAKERDr Peter Yuen (Cranfield University)
PROFILE

Peter has ~30 years of research experience in academia and in corporate/government laboratories, and has published over 70 journal and conference papers in semiconductor physics, defence science and electro-optics, including image processing, remote sensing and machine vision. He has been a Fellow of the Institute of Physics (FInstP) and a Fellow of the Institute of Mathematics and its Applications (FIMA) since 2001, and has been a Reader at Cranfield University since joining in 2007. Peter has supervised ~10 PhD students in the areas of electro-optics, hyperspectral imaging and machine vision, and has been chief investigator of a number of defence-related projects in areas such as target detection, hyperspectral remote sensing, machine vision and counter-terrorism.

ABSTRACT

This presentation gives an overview of the hyperspectral/multispectral imaging research undertaken in our Shrivenham lab at Cranfield within the last decade, with particular emphasis on techniques for the detection of very small minority targets, subpixel target detection, band selection, targets in sub-surface layers, and hyperspectral scene simulation. A selection of representative projects in these areas for defence, security (counter-terrorism) and manufacturing (Procter & Gamble) applications will be described in more detail. The presentation will cover techniques for assessing the physiological state of a person wirelessly, discriminating species of fine powders in tablets/food, imaging using coded apertures, and spectral unmixing technology.

DATE2016-05-09
TIME14:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Lexical Inference as a Spatial Reasoning Problem

SPEAKERDr Steven Schockaert (Cardiff University)
PROFILE

Steven Schockaert obtained his PhD from Ghent University in 2008 on the topic of "reasoning about fuzzy temporal and spatial information from the web". For this work, he received the ECCAI Artificial Intelligence Dissertation award as well as the IBM Belgium Prize for Computer Science. He became a lecturer at Cardiff University in 2011, where he is now a senior lecturer. He has written over 100 papers in international journals and conferences. His work has been funded by the European Research Council (ERC), the Engineering and Physical Sciences Research Council (EPSRC), the Leverhulme Trust, and the Research Foundation Flanders (FWO). He is an area editor for Fuzzy Sets and Systems and he is on the editorial board of Artificial Intelligence. He is program co-chair of the 10th International Conference on Scalable Uncertainty Management and special session co-chair of IEA/AIE 2017, and has been on the program committee of over 60 international conferences and workshops.

ABSTRACT

Humans are often able to draw plausible conclusions from incomplete information based on background knowledge about the world. As a substantial part of this background knowledge is of a lexical nature, a crucial challenge in automating plausible reasoning consists in learning how different words are semantically related. In this talk I will argue that most of the lexical relations that we need for plausible reasoning can be identified with qualitative spatial relations in semantic spaces, i.e. high-dimensional Euclidean spaces in which words are represented as geometric objects. This leads us to treat lexical inference as a qualitative spatial reasoning problem, and allows us to combine distributional representations with relation extraction methods and existing lexical resources.
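The idea of identifying lexical relations with spatial relations can be made concrete with a toy region-based representation (an illustrative sketch, not the speaker's method): model each word as a ball in the semantic space and read entailment as region containment.

```python
import math

def contains(ball_a, ball_b):
    """Toy region-based lexical inference: each word is a ball
    (centre, radius) in a semantic space, and 'b entails a' is
    modelled as spatial containment of b's region within a's.
    The geometry here is purely illustrative."""
    (ca, ra), (cb, rb) = ball_a, ball_b
    return math.dist(ca, cb) + rb <= ra
```

In this picture a hypernym like "animal" occupies a larger region enclosing the region of "dog", so the lexical relation reduces to a qualitative spatial query.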

DATE2016-04-18
TIME14:10:00
PLACEEdward Llwyd Room 1.16


TITLE

On the Analysis of Simple Genetic Programming for Evolving Boolean Functions

SPEAKERPietro Oliveto
PROFILE

Pietro S. Oliveto is currently a Vice-Chancellor Fellow and an EPSRC Early Career Fellow at the University of Sheffield, UK. He received the Laurea degree and PhD degree in computer science respectively from the University of Catania, Italy in 2005 and from the University of Birmingham, UK in 2009. From October 2007 to April 2008, he was a visiting researcher of the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund where he collaborated with Prof. Ingo Wegener's research group. He has worked at the University of Birmingham, respectively, from 2009 to 2010 as an EPSRC PhD+ Fellow and from 2010 to 2013 as an EPSRC Postdoctoral Fellow in Theoretical Computer Science. His main research interest is the time complexity analysis of randomized search heuristics for combinatorial optimization problems. He has published a review paper of the field and two book chapters on the theoretical techniques for the analysis. He has won best paper awards at the GECCO 2008, ICARIS 2011 and GECCO 2014 conferences and got very close at CEC 2009 and at ECTA 2011 through best paper nominations. Dr. Oliveto has given tutorials on the runtime complexity analysis of EAs at WCCI 2012, CEC 2013, GECCO 2013, WCCI 2014, GECCO 2014, GECCO 2015 and SSCI 2015. He has guest-edited special issues of the Springer Journal of Computer Science and Technology and of the Evolutionary Computation journal and has co-chaired the 2015 IEEE symposium on Foundations of Computational Intelligence (FOCI 2015). He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), member of the EPSRC Peer Review College and Chair of the IEEE CIS Task Force on Theoretical Foundations of Bio-inspired Computation.

ABSTRACT

This work presents a first step towards a systematic time and space complexity analysis of genetic programming (GP) for evolving functions with desired input/output behaviour. Two simple GP algorithms, called (1+1) GP and (1+1) GP*, equipped with minimal function (F) and terminal (L) sets, are considered for evolving two standard classes of Boolean functions. It is rigorously proved that both algorithms are efficient for the easy problem of evolving conjunctions of Boolean variables with the minimal sets. However, if an extra function (i.e. NOT) is added to F, then the algorithms require at least exponential time to evolve the conjunction of n variables. It is also proved that both algorithms fail to evolve the difficult parity function in polynomial time, with probability at least exponentially close to 1. Concerning generalisation, it is shown how the quality of the evolved conjunctions depends on the size of the training set s, while the evolved exclusive disjunctions generalise equally badly independently of s.
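The flavour of the analysed setting can be seen in a drastically simplified hillclimber for the conjunction benchmark (a sketch only: the algorithms in the talk operate on syntax trees with tree-based mutations, whereas here an individual is reduced to the set of variables it conjoins):

```python
import random
from itertools import product

def one_plus_one_gp_and(n, max_iters=10000, seed=0):
    """Toy (1+1)-style hillclimber for the target AND of n variables,
    sketching the minimal setup F = {AND}, L = {x1..xn}: an individual
    is just the set of variables it conjoins."""
    rng = random.Random(seed)
    inputs = list(product([0, 1], repeat=n))
    target = [int(all(x)) for x in inputs]

    def fitness(ind):  # agreements with AND_n over all 2^n inputs
        return sum(int(all(x[i] for i in ind)) == t
                   for x, t in zip(inputs, target))

    ind = set()
    best = fitness(ind)
    for _ in range(max_iters):
        child = set(ind)
        v = rng.randrange(n)           # flip membership of one variable
        child.symmetric_difference_update({v})
        f = fitness(child)
        if f >= best:  # accept ties; a strict variant would need f > best
            ind, best = child, f
        if best == len(inputs):
            break
    return ind, best
```

With k variables conjoined, the individual errs only on the inputs where all chosen variables are 1 but the target is 0, so fitness increases strictly with every variable added and the climb to the full conjunction is fast.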

DATE2016-03-14
TIME14:10:00
PLACEHO-A12 Hugh Owen


TITLE

Conjuring Constraint Models

SPEAKERProfessor Ian Miguel (University of St Andrews)
PROFILE

Prof. Ian Miguel was educated as an undergraduate at St Andrews between 1992 and 1996 in the School of Computer Science. He gained an MSc (1997) and then a PhD (2001) from the University of Edinburgh. His PhD thesis received the British Computer Society/Council of Professors and Heads of Computing Distinguished Dissertation Award. Following a period of post-doctoral research at the University of York (2000-2004), Ian was appointed to a Lectureship at the School of Computer Science at St Andrews in 2004. Concurrently, he held a Royal Academy of Engineering/EPSRC Research Fellowship (2004-2009). Ian was promoted to Reader in 2009, and to Professor in 2014. He has researched Constraint Programming throughout his career, recently focusing on automated constraint modelling and the construction of efficient constraint solvers. Ian is Principal Investigator of the current EPSRC-sponsored project Working Together: Constraint Programming and Cloud Computing (£630K), having previously been Principal or Co-investigator of four other EPSRC grants and an EPSRC CASE for New Academics award, sponsored by Microsoft Research, totalling approx. £3M.

ABSTRACT

Efficient decision-making in the face of problems where thousands of different considerations interlock in complex ways is of central importance to a modern society. Constraints are a natural, powerful means of representing and reasoning about decisions of this kind. For example, in the production of a university timetable many constraints occur, such as: the maths lecture theatre has a capacity of 100 students; no student can attend two lectures at once. Constraint programming offers a means by which solutions to such problems can be found automatically, and proceeds in two phases. First, the problem is modelled as a set of decision variables and a set of constraints on those variables that a solution must satisfy. Then, a constraint solver is used to search for solutions.
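The two-phase process just described (model, then search) can be illustrated with a deliberately naive solver for the timetable example (a sketch only: real constraint solvers use propagation and heuristic search rather than enumeration, and all names and parameters here are illustrative):

```python
from itertools import product

def solve_timetable(lectures, slots, attends, capacity):
    """Model: one slot variable per lecture, plus the two constraints
    from the timetable example. Search: brute-force enumeration of
    assignments. `attends` maps each student to their lectures."""
    # constraint 1 (checkable up front): no lecture exceeds capacity
    enrol = {l: 0 for l in lectures}
    for lects in attends.values():
        for l in lects:
            enrol[l] += 1
    if any(n > capacity for n in enrol.values()):
        return None
    # constraint 2: no student attends two lectures in the same slot
    for assignment in product(slots, repeat=len(lectures)):
        slot_of = dict(zip(lectures, assignment))
        if all(len({slot_of[l] for l in lects}) == len(lects)
               for lects in attends.values()):
            return slot_of
    return None
```

The value of a good model is exactly that it lets the second phase avoid this kind of exhaustive search, which is why the modelling choices Conjure automates matter so much.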

There are typically many possible models for a given problem, and the model chosen can dramatically affect the efficiency of constraint solving. This presents a serious obstacle for non-expert users, who have difficulty in formulating a good (or even correct) model from among the many possible alternatives. Therefore, automating constraint modelling is a desirable goal. This talk will describe Conjure, an automated constraint modelling system.

The input to Conjure is a specification in the language Essence, which allows a problem to be specified without making constraint modelling decisions. Using a set of refinement rules, Conjure automatically transforms this specification into a constraint model. We will discuss how useful transformations, such as breaking symmetry, can be performed during the refinement process. Since different refinement paths can produce different models, we will also discuss mechanisms for automatically selecting effective models from the set that Conjure can generate.

DATE2016-02-29
TIME14:10:00
PLACEHO-A12 Hugh Owen


TITLE

Histological Imaging

SPEAKERProfessor Gabriel Landini (University of Birmingham)
PROFILE

Gabriel Landini received the degree in dentistry from the Universidad de la Republica, Uruguay, in 1984 and the PhD degree in oral pathology from Kagoshima University, Japan, in 1991. Currently, he is Professor of Analytical Pathology at the School of Dentistry, University of Birmingham. His research interests include image analysis applied to the histopathology of oral cancer, quantitative microscopy, and fractals in relation to pattern formation in neoplastic growth and cell mixing.

ABSTRACTG. Landini, D. A. Randell, S. Fouad, School of Dentistry, University of Birmingham.
H. Mehanna, School of Cancer Studies, University of Birmingham.
A. Galton, Department of Computer Science, University of Exeter.

The development of histology (the study of cells and tissues) has been essential for understanding the biology, microscopic anatomy and function of tissues and organisms. In histopathology (the application of histology to the identification of disease) most diagnostic decisions rely on the knowledge and experience of expert observers interpreting samples of cells and tissues under the microscope. However, this has one disadvantage: the subjectivity inherent in visual perception makes it sometimes difficult to achieve truly quantitative or strictly reproducible judgements.
This talk will present some of our approaches to developing context-based imaging programs to help advance automated tissue analysis and diagnosis. By 'context-based' we mean the use of data constructs that, first, allow the structure and relations of cells and tissues in samples to be modelled in a way that enables computer programs subsequently to reason about the image contents and, second, facilitate both quantitative and reproducible data extraction.
We propose using a spatial logic called discrete mereotopology for encoding and querying relations between biologically-relevant entities (e.g. cell nuclei and profiles, cell layers, staining patterns). This enables histologically relevant models (e.g. cells, tissue types and voids) to be explicitly represented and then operated on at a level that has not been possible before. Consequently, traditional pixel-based routines can be augmented with region-based algorithms that become histologically relevant to segmented structures arising in histological imaging.
The approaches outlined in this talk are both translational and applicable to most biological areas using microscopy where quantitative results are required to make evidence-based decisions.

Randell DA, Landini G, Galton A. Discrete mereotopology for spatial reasoning in automated histological image analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (3):568-581, 2013.
Flight R, Landini G, Styles I, Shelton R, Milward M, Cooper P. Automated optimisation of cell segmentation parameters in phase contrast microscopy using discrete mereotopology. Proceedings of the 19th Conference on Medical Image Understanding and Analysis. Lincoln, Jul 15-17, 2015.
Landini G, Randell DA, Galton A. Discrete mereotopology in histological imaging. Proceedings of the 17th Conference on Medical Image Understanding and Analysis. Claridge E, Palmer AD, Pitkeathly WTE (eds.), 2013, p. 101-106.
Landini G, Randell DA, Galton A. Intelligent imaging using Discrete Mereotopology. Proceedings of the Fourth ImageJ user and developer Conference, Luxembourg, 24-26 October, 2012.
Randell DA, Landini G. Discrete mereotopology in automated histological image analysis. Proceedings of the Second ImageJ user and developer Conference, Luxembourg, 6-7 November, 2008.
DATE2016-02-15
TIME14:10:00
PLACEHO-A12 Hugh Owen


TITLE

Lifelong Learning for Optimisation

SPEAKERProfessor Emma Hart
PROFILE

Prof. Hart received her PhD from the University of Edinburgh. She currently leads the Centre for Emergent Computing at Edinburgh Napier University, where her research focuses on optimisation and continuous learning systems, with an emphasis on applying methods from Artificial Immune Systems and Hyper-Heuristics. She has published extensively in the field of Artificial Immune Systems, with a particular interest in optimisation and self-organising systems such as swarm robotics. Her current interests relate to the development of optimisation algorithms that continuously learn through experience, and how collectives of algorithms can collaborate to form good problem solvers. She also has interests in more theoretical work on modelling the immune system to learn more about its computational properties. She is an Associate Editor of Evolutionary Computation, a member of the SIGEVO Executive Board and editor of the SIGEVO newsletter.

ABSTRACT

The previous two decades have seen significant advances in meta-heuristic and hyper-heuristic optimisation techniques that are able to quickly find optimal or near-optimal solutions to problem instances in many combinatorial optimisation domains. Despite many successful applications, both approaches share some common weaknesses: if the nature of the problems to be solved changes over time, the algorithms need to be periodically re-tuned, and many approaches are inefficient in that they start from a clean slate every time a problem is solved, failing to exploit previously learned knowledge.

In contrast, in the field of machine-learning, a number of recent proposals suggest that learning algorithms should exhibit life-long learning, retaining knowledge and using it to improve learning in the future. Looking to nature, we observe that the natural immune system exhibits many properties of a life-long learning system that could be computationally exploited. I will give a brief overview of the immune system, focusing on highlighting its relevant computational properties and then show how it can be used to construct a lifelong learning optimisation system. The system is shown to adapt to new problems, exhibit memory, and produce efficient and effective solutions when tested in both the bin-packing and scheduling domains.

DATE2015-11-16
TIME14:00:00
PLACELL-G3


TITLE

Glandular Morphometrics for the Profiling of Colorectal Adenocarcinoma

SPEAKERProfessor Nasir Rajpoot
PROFILE

Dr. Nasir Rajpoot is an Associate Professor (Reader) in the Department of Computer Science & Engineering at Qatar University and also an Associate Professor in Computer Science at the University of Warwick, UK. He is the founding Head of the BioImage Analysis (BIA) lab at Warwick. Dr Rajpoot received his PhD in digital image processing from the University of Warwick in 2001. Prior to that, he was a postgraduate research fellow in the Applied Mathematics program at Yale University, USA during 1998-2000. His research interests lie in digital pathology image analysis, multiplex biomarkers in cancer, and pattern recognition. The recent focus of research in his group has been twofold: multi-scale modelling of objects of interest in histology images, and analysis of molecular expression patterns in multi-stain microscopy images. His group recently won the MITOS-ATYPIA challenge contest on nuclear atypia scoring in breast cancer histology images held in conjunction with ICPR'2014, and was ranked among the top three contestants in the AMIDA challenge contest on mitotic cell detection in breast histology images held in conjunction with MICCAI'2013. Dr Rajpoot has published over 100 articles in the areas of histology image analysis, image coding, and pattern recognition in journals of international repute and in proceedings of highly reputable international conferences. He has chaired several meetings in the area of histopathology image analysis (HIMA). Dr Rajpoot was the General Chair of the Medical Image Understanding and Analysis (MIUA) conference in 2010, and the Technical Chair of the British Machine Vision Conference (BMVC) in 2007. He has guest edited a special issue of Machine Vision and Applications on Microscopy Image Analysis and its Applications in Biology in 2012 and another special issue of the IEEE Transactions on Medical Imaging.
He is a Senior Member of IEEE and member of the ACM, the British Association of Cancer Research (BACR), and the European Association of Cancer Research (EACR).

ABSTRACT

Colorectal Adenocarcinoma (CAd), a type of colorectal carcinoma originating from epithelial cells of intestinal glands in colon mucosa, accounts for more than 90% of the colorectal carcinomas diagnosed worldwide. Visual examination of the CAd tissue slides, stained with Hematoxylin & Eosin (H&E), by expert histopathologists remains the gold standard for CAd grading and prognosis. Morphology of intestinal glands and their surrounding context have been routinely used by the histopathologists to determine the aggressiveness of CAd. However, this rich set of information is still under-utilized as a means for assessing the CAd patients’ prognosis as compared to the TNM staging. In this talk, I will describe a stochastic polygons model for the segmentation of glandular structures in histology images of colon tissue. To the best of our knowledge, ours is the first method of its kind that addresses the problem of gland segmentation in healthy colon tissue as well as in various grades of benign and CAd tissue. I will end the talk with an overview of the ongoing research in my group in the area of digital pathology.

DATE2015-10-09
TIME11:00:00
PLACEHugh Owen Lecture Theatre C22


TITLE

Modelling of Chromatic Contrast for Retrieval of Wallpaper Images

SPEAKERProf. Xiaohong Gao (Middlesex University)
PROFILEProf. Xiaohong (Sharon) Gao is currently a Professor in Computer Vision and Imaging Science. She obtained her PhD degree from Loughborough University on modelling of colour appearance. Subsequently, she worked as a post-doc on retinal images at St. Mary's Hospital at Imperial College and on brain images at Addenbrooke's Hospital in Cambridge for 4 years before joining Middlesex University as a Lecturer. Her current interests include image retrieval, 3D brain images (CT, MR and PET) and 3D echocardiograms.
ABSTRACTColour remains one of the key factors in representing an object and, consequently, has been widely applied in the retrieval of images based on their visual contents. However, a colour's appearance changes with its viewing surroundings, a phenomenon that has so far received little attention in colour-based image retrieval. To model this effect, this talk presents a chromatic contrast model, CAMcc, developed for the retrieval of colour-intensive images, filling a gap left by most existing colour models by taking simultaneous colour contrast into account. The model is then applied to a retrieval task on a collection of colour-rich museum wallpaper images, with a web-based demonstration using the 'MODA' image folder.
DATE2015-07-06
TIME16:10:00
PLACELecture Theatre B - Physical Sciences Building


TITLE

The Role of Student Communities of Practice in Processing Feedback

SPEAKERDr Stephen Merry (Staffordshire University)
PROFILEI graduated from the University of York in 1974 with BA(Hons) Biology and then went straight on to undertake PhD research in the Department of Chemical Sciences at the University of Hertfordshire (then Hatfield Polytechnic). My thesis was entitled ‘Chemical Synthesis and Biological Screening of Some Novel Anticancer Agents’. After completing those studies I undertook postdoctoral research assistantships at St Mary’s Hospital Medical School (University of London) and the Department of Medical Oncology (University of Glasgow) followed by four years in industry working for a diagnostics company. I joined Staffordshire University as Senior Lecturer in Cell Biology in 1993 and continued in that role until 2013. I completed a Post Graduate Certificate in Higher and Professional Education in 1994 and since that time I have become increasingly involved in educational research projects which have investigated the effects of assessment practices on how students learn and, in particular, the role of peer and tutor formative feedback in promoting students’ academic development. My interests in both educational and cell biological research have continued into my retirement.
ABSTRACTFeedback is often conceptualised as the tutor, the expert, feeding back aspects of their expertise which students then assimilate to develop their understanding. This practice seems intrinsically the right thing to do and has been carried out for many years in a relatively unchanging fashion, but little is known as to the process by which student learning from that feedback occurs. This presentation seeks to define the scope of feedback that students receive, to explain the relevance of theories of social learning to student feedback practices and to consider how the curriculum might be changed in order to maximise the intended learning from tutor feedback. Evidence will be presented that students form communities of practice which considerably enrich the feedback that they receive from tutors and may also translate it into unintended meanings. High achieving students show enhanced self-assessment capabilities which enable them to more effectively integrate the diverse feedback that they receive. Hence curriculum changes that promote student self-assessment may be more effective in developing learning than focussing on the quantity and nature of the feedback itself.
DATE2015-06-22
TIME16:10:00
PLACELecture Theatre A14 - Hugh Owen Building


TITLE

Engaging Students With Feedback

SPEAKERDr Paul Orsmond (Staffordshire University)
PROFILEPaul Orsmond is a biologist at Staffordshire University who has published many refereed papers, book chapters and a well-received book on assessment and feedback. He has presented at national and international conferences and most recently spoke at the Heads of Biology conference in Milton Keynes.
ABSTRACTThis presentation will consider three things. Firstly, it will look at some of the concerns regarding student engagement with feedback. Secondly, it will look at how high-achieving and non-high-achieving students use feedback, and explore whether the differences provide guidance for tutors in making their feedback effective. Lastly, it will look at ways in which tutors and students can engage with feedback. In addition, it will examine some of the current literature on feedback and consider the themes that have developed from that literature.
DATE2015-06-08
TIME16:10:00
PLACELecture Theatre A14 - Hugh Owen Building


TITLE

Geometry of Variational Problems in Quantum Information

SPEAKERDr Roman Belavkin
PROFILETBC
ABSTRACTVariational problems in information theory are concerned with finding an optimal probability measure that minimises or maximises the expectation of some random variable (i.e. the objective function) on a convex set, defined by a constraint on some information distance from a reference measure. Classical examples of such problems are the maximisation of entropy and the optimisation of channel capacity, where the information distance is usually the Kullback-Leibler divergence. Such problems can also be considered in quantum probability and information theories, but non-commutativity introduces ambiguity in the way the concept of information distance can be defined. We discuss several types of variational problems for quantum information, and how their solutions depend on the definitions of quantum information. We also introduce a variational problem of a new type, with a constraint on quantum cross-information. If time permits, we shall also discuss how asymmetric information distances introduce asymmetric topologies on the set of state operators.
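The classical (commutative) problem described above can be written compactly as follows (a standard formulation; the notation is illustrative, not the speaker's):

```latex
% Optimise the expectation of f over probability measures p lying in an
% information ball of radius \lambda around a reference measure q
\begin{aligned}
& \underset{p}{\text{minimise}} && \mathbb{E}_p[f] = \int f \, dp \\
& \text{subject to} && D_{\mathrm{KL}}(p \,\|\, q) = \int \ln\frac{dp}{dq}\, dp \;\le\; \lambda
\end{aligned}
```

Maximum entropy is the special case where q is the uniform (counting) measure, since maximising entropy then coincides with minimising the KL divergence from q; the quantum versions replace measures by state operators, where the ambiguity discussed in the talk arises.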
DATE2015-04-27
TIME16:10:00
PLACELecture Theatre B - Physical Sciences Building


TITLE

Assessment and Feedback to Enhance Learning

SPEAKERProfessor Lin Norton (Liverpool Hope University)
PROFILELin Norton is a National Teaching Fellow, Emeritus Professor of Pedagogical Research at Liverpool Hope University, and a Visiting Professor at the Centre for Higher Education Research and Practice, Ulster University. Formerly, Lin was Dean of Learning and Teaching and Professor at Liverpool Hope University. She is a psychologist by background and has had throughout her career a strong interest in assessment and feedback practices. She has written numerous publications, given conference papers and run workshops in this area. Since 'retirement' in 2010, Lin has been pursuing her research interests in lecturers' views of assessment, marking and feedback.
ABSTRACTAccording to Havnes (2004) ‘Assessment directs student learning because (for students) it is the assessment system that defines what is worth learning’. In this presentation, Professor Lin Norton will draw on perspectives from international experts in the field (such as David Boud, Royce Sadler and David Nicol) who broadly agree that there is a need for improvement in both assessment and feedback. Their view is supported by UK national performance indicators of quality such as the Quality Assurance Agency (QAA, 2010) and the National Student Survey (NSS) 2005-13, although students have become more satisfied over time (HEFCE, 2014). Against this backdrop, Lin will reflect on over 20 years of her own research into assessment from both the students’ and the academics’ perspectives (Norton & Norton, 2013) and draw some conclusions about how assessment and feedback can be used to enhance student learning.
DATE2015-04-20
TIME16:10:00
PLACELecture Theatre C4 - Hugh Owen Building


TITLE

Long-Term Autonomy in Everyday Environments: A New Challenge for AI and Robotics

SPEAKERNick Hawes
PROFILEDr Nick Hawes is a Senior Lecturer in the School of Computer Science at the University of Birmingham. His research applies techniques from artificial intelligence to allow robots to perform useful tasks for, or with, humans in everyday environments (such as making your breakfast, or supporting nursing staff in a care home). He is particularly interested in how robots can understand the world around them and how it changes over time (e.g. where objects usually appear, how people move through buildings etc.), and how robots can exploit this knowledge to perform tasks more efficiently and intelligently.
ABSTRACTThe performance of autonomous robots, i.e. robots that can make their own decisions and choose their own actions, is becoming increasingly impressive, but most of them are still constrained to labs or controlled environments. In addition, these robots are typically only able to do intelligent things for a short period of time before either crashing (physically or digitally) or running out of things to do. To go beyond these limitations, and to deliver the kind of autonomous service robots required by society, we must conquer the challenge of combining artificial intelligence and robotics to develop systems capable of long-term autonomy in everyday environments. This talk will present an overview of research in this direction, focussing on the mobile robots for security and care domains developed by the EU-funded STRANDS project.
DATE2015-03-16
TIME16:10:00
PLACELecture Theatre B - Physical Sciences Building


TITLE

Exploring Human Hand Motion Capabilities into Prosthetic Manipulation

SPEAKERProfessor Honghai Liu, University of Portsmouth
PROFILEHonghai Liu received his PhD degree in robotics from King's College London, UK. He is currently a Professor of Intelligent Systems and leads the Intelligent Systems and Biomedical Robotics Group in the School of Creative Technologies at the University of Portsmouth, UK. He previously held research appointments at the Universities of London and Aberdeen, and project-leader appointments in the large-scale industrial control and system integration industry. He is interested in biomechatronics, pattern recognition, intelligent video analytics, intelligent robotics and their practical applications, with an emphasis on approaches that can contribute to the intelligent connection of perception to action using contextual information. He has authored/co-authored more than 200 peer-reviewed journal and conference papers.
ABSTRACTProducing an artificial hand with the manipulation capabilities of the human hand requires multi-disciplinary effort. Honghai will briefly review the state of the art in prostheses, then identify existing challenges, and finally report on research carried out at Portsmouth. He will present a unified computational framework aimed at instilling human hand manipulation skills into an artificial prosthesis, with a focus on sensing, gesture recognition and skill transfer. Examples will be given of the most recent developments in the Intelligent Systems and Biomedical Robotics Group in the field of prosthetic sensing and skill transfer.
DATE2015-03-04
TIME14:00:00
PLACELecture Theatre B - Physical Sciences Building


TITLE

Studying Human Social Behaviour on Internet Social Media

SPEAKERJohn Bryden, QMUL
PROFILEMy background is in modelling social behaviour in a variety of systems. I did my bachelors in Philosophy and Mathematics at Bristol, before working in industry for eight years. I did a Master's and PhD in Leeds where I modelled locomotion behaviour in the nematode C. elegans and the evolution of different reproductive strategies. I am currently a Research Fellow at Royal Holloway where I work modelling altruism in aphids, colony failure in bees, and human behaviour on online social networks.
ABSTRACT

Online social networks give us an unprecedented opportunity to collect large volumes of information to investigate subtle patterns of social behaviour. In this talk I will present work I have done modelling social behaviour and applying these models to test theories from the social sciences that have previously been untested at a large scale. There is a great potential for discovery of new theories of social behaviour by looking for weak signals in these large quantities of data.

First, I will concentrate on homophily: the process by which we bias our interactions towards similar others. We used a model of dynamic networks with stochastic processes to study it. Although often portrayed as fixed in time, many real-world networks are inherently dynamic, as the edges that join nodes are cut and rewired and the nodes themselves update their states. I will present a model that builds upon existing models of coevolving networks, characterizing how dynamic and stochastic behaviour at the level of individual nodes can generate stable aggregate behaviours. An important process in the model is homophily, where nodes tend to rewire to other, similar nodes. These results show that homophily in dynamic networks can maintain the stable community structure observed in many social and biological systems.
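The homophilous rewiring step described above can be sketched in a few lines. The code below is a minimal toy model under my own assumptions, not the speaker's actual model: node states are held fixed and only the rewiring dynamics are simulated, so the fraction of edges joining same-state nodes can be read off directly (all names and parameter values are illustrative).

```python
import random

def homophily_rewiring(n=60, k=4, steps=3000, homophily=0.9, seed=1):
    """Toy dynamic network: repeatedly cut a random edge and rewire one
    endpoint, preferring (with probability `homophily`) a target node that
    shares the kept endpoint's state. Returns the final fraction of edges
    whose endpoints agree."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    edges = set()
    while len(edges) < n * k // 2:          # random initial edge list
        a, b = rng.sample(range(n), 2)
        edges.add((min(a, b), max(a, b)))
    edges = list(edges)
    for _ in range(steps):
        i = rng.randrange(len(edges))
        a, _ = edges[i]                     # keep endpoint a, rewire the other
        if rng.random() < homophily:
            cands = [c for c in range(n) if c != a and state[c] == state[a]]
        else:
            cands = [c for c in range(n) if c != a]
        if not cands:                       # degenerate case: rewire randomly
            cands = [c for c in range(n) if c != a]
        c = rng.choice(cands)
        edges[i] = (min(a, c), max(a, c))
    return sum(state[u] == state[v] for u, v in edges) / len(edges)
```

With strong homophily most edges end up joining like with like, while purely random rewiring leaves roughly half the edges mixed, which is the aggregate community-like structure the model is meant to illustrate.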

Following on from this is a study looking at how language and social network structure interlink on Twitter, an online social network. Language has functions that transcend the transmission of information and varies with social context. The study found that the network emerging from user communication on Twitter can be structured into communities, and that the frequencies of words used within those communities closely replicate this pattern. Looking at the word usage of community members, we found that they share language features indicating common interests. This suggests that the Twitter network was formed by a process of homophily, as described by the dynamic network model. It also confirms theory from the field of sociolinguistics, which argues that people in communities share similar language features.

Finally, I will focus on the communities themselves, looking for evidence of social identity. We examined whether Twitter users change their language according to which community they are communicating with, testing a theory from social psychology called Communication Accommodation Theory. I also looked for evidence of convergence, which suggests that people develop more similar language patterns the more they talk to one another.

DATE2015-03-02
TIME16:10:00
PLACELecture Theatre B - Physical Sciences Building


TITLE

Facial Expression Synthesis, Perception and Analysis

SPEAKERDr Hui Yu, University of Portsmouth
PROFILEDr Hui Yu is a Senior Lecturer at the University of Portsmouth. He previously held an appointment at the University of Glasgow. His research interests include computer graphics, vision and the application of machine learning to these areas, particularly human-computer interaction, human behaviour understanding, affective analysis and geometric processing of human and facial performances. His research has been supported by the EPSRC, the EU FP7 Programme and university internal funds. He is on the editorial boards of international journals and serves as a programme chair, session chair and workshop organiser for international conferences.
ABSTRACTIn this talk I will discuss research issues in 4D facial expression synthesis and analysis. The main purpose is to generate realistic and “valid” dynamic facial expressions. As human beings are very sensitive to even subtle changes in facial movements, it is critical that the facial movements on a 3D face are realistic. We solve this problem using a local geometric encoding method to map expressions from real human beings, and we synthesize 4D facial expression information that conveys a message to the viewer without the use of actor performances. We will discuss how to ensure that the message intended by the dynamic 3D facial expression is actually perceived as such. A 3D facial expression optimization technique will also be discussed, as will the cultural influence on facial expressions. Our research findings refute the assumed universality of facial expressions of emotion.
DATE2015-02-16
TIME16:10:00
PLACELecture Theatre B - Physical Sciences Building


TITLE

Think Globally, Act Locally: a patch-based approach to Face Analysis and Synthesis

SPEAKERWill Smith, University of York
PROFILE

William Smith received the B.Sc. degree in computer science and the Ph.D. degree in computer vision from the University of York. He is now a Senior Lecturer there and a member of the Computer Vision and Pattern Recognition research group. His research interests are in face modelling, shape-from-shading, reflectance analysis and the psychophysics of shape-from-X. He has published more than 80 papers in international conferences and journals, was awarded the Siemens best security paper prize at BMVC 2007, and was a finalist as the U.K. nominee for the ERCIM Cor Baayen award 2009. He is an associate editor of the IET journal Computer Vision, and has served as co-chair of the ACM International Symposium on Facial Analysis and Animation in 2010 and 2012 and of the CVPR 2008 workshop on 3D Face Processing.

ABSTRACT

In this talk, I will propose a new approach to face modelling and demonstrate its application to face processing problems including super-resolution, texture completion and synthesis. The dominant approach in face modelling over the past three decades has been to build statistical models that capture global variations in face appearance. Such approaches include Eigenfaces, Active Appearance Models and 3D Morphable Models. Their weakness is that they only capture the common modes of variation over a face population, and so fail to describe the fine-scale details that make faces unique, recognisable and realistic. I propose a model in which a face is constructed as a patchwork of local texture patches copied from real faces. To fit to data, patches are selected over a 3D model using Belief Propagation so as to be consistent with their neighbours and with the observations via an appropriate image-formation model. This approach ensures that synthesised faces are photorealistic and contain plausible high-frequency detail, which is advantageous when analysing deficient data such as low-resolution images. From a modelling perspective, the conclusion is that, although face identity is unique, local regions are not, and a global identity can be constructed by copying locally.

DATE2015-02-02
TIME16:10:00
PLACELecture Theatre B - Physical Sciences Building


TITLE

Seeing the Leaves for the Trees (Or the Crops for the Weeds): 3D Machine Vision for Agriculture and Beyond

SPEAKERDr Ian Hales, Centre for Machine Vision, University of Bristol
PROFILETo Follow
ABSTRACT

Current agricultural techniques for management of weeds in crop fields often involve the wide-scale spraying of herbicides, which is expensive both economically and environmentally. In addition, an increasing global population requires an increasing crop output, which in turn requires more efficient use of existing agricultural land. By controlling weed growth, a higher yield can be maintained, but in order to reduce the amount of herbicides used to do so, it is important to identify the location and structure of weed clusters growing in a field.

To minimise the use of herbicides, one must first locate the weeds within a crop field. Given time and resources, one can accurately model much of the variation that exists in weeds at particular growth stages using existing, mature computer vision techniques. However, in a real-world scenario resources and time are highly limited. In this talk, I will outline the approaches taken during a recent project with Harper Adams University for high frame-rate detection and analysis of out-of-row weeds in maize crops. I will also address the related problem of broad-leaf dock detection in grassland, offering some insights gained from an initial investigation of the issue.

DATE2015-01-19
TIME13:10:00
PLACERm 0.32 IBERS Bldg, Penglais


TITLE

Probability, Fuzziness and Borderline Cases

SPEAKERProfessor Jonathan Lawry
PROFILETo Follow
ABSTRACT

In this talk we will explore the relationship between fuzziness and probability. By considering probability defined over three-valued truth models we introduce a new formalism for uncertainty modelling and decision making concerning vague propositions. This combines the explicit representation of borderline cases with both semantic and stochastic uncertainty, in order to define measures of subjective belief in vague propositions. Within this framework we investigate bridges between fuzzy logic and probability theory. In particular, when the underlying truth model is from Kleene’s three-valued logic then we provide a complete characterisation of compositional min-max fuzzy truth degrees. For classical and supervaluationist truth models we find only partial bridges, with min-max combination rules only recoverable on a fragment of the language. Finally, we consider the issue of conditioning and also propose how this approach can be extended from a propositional to a simple predicate language.
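The compositional min-max truth degrees mentioned above can be written down directly for Kleene's strong three-valued logic. The sketch below is only illustrative: the connectives are standard, but the `belief` encoding of a probability distribution over three-valued valuations (and the `tall` example) is my own simplified assumption, not the talk's formalism.

```python
# Kleene's strong three-valued logic: 0 = false, 0.5 = borderline, 1 = true
def t_not(a):
    return 1 - a

def t_and(a, b):
    return min(a, b)   # compositional min truth degree for conjunction

def t_or(a, b):
    return max(a, b)   # compositional max truth degree for disjunction

def belief(distribution, prop):
    """Lower and upper belief in a vague proposition, given a probability
    distribution over three-valued valuations (pairs of a dict mapping
    proposition names to truth values, and a probability). Lower belief
    counts only 'definitely true' valuations; upper belief also counts
    borderline ones, so the gap reflects the borderline cases."""
    lower = sum(p for v, p in distribution if v[prop] == 1)
    upper = sum(p for v, p in distribution if v[prop] >= 0.5)
    return lower, upper

# e.g. 'tall' is definitely true with prob. 0.5, borderline with prob. 0.3
dist = [({"tall": 1}, 0.5), ({"tall": 0.5}, 0.3), ({"tall": 0}, 0.2)]
```

Note how the borderline value 0.5 is a fixed point of negation (`t_not(0.5) == 0.5`), which is exactly what distinguishes a borderline case from classical truth values.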

DATE2015-01-19
TIME16:10:00
PLACELecture Theatre B - Physical Sciences Building


TITLE

Using a Colour Camera as a Measurement Device

SPEAKERProfessor Graham D. Finlayson, University of East Anglia
PROFILE

Graham Finlayson is a Professor of Computer Science at the University of East Anglia. He joined UEA in 1999 when he was awarded a full professorship at the age of 30. He was and remains the youngest ever professorial appointment at that institution. Graham trained in computer science first at the University of Strathclyde and then for his masters and doctoral degrees at Simon Fraser University where he was awarded a ‘Dean’s medal’ for the best PhD dissertation in his faculty. Prior to joining UEA, Graham was a lecturer at the University of York and then a founder and Reader at the Colour and Imaging Institute at the University of Derby. Professor Finlayson is interested in ‘Computing how we see’ and his research spans computer science (algorithms), engineering (embedded systems) and psychophysics (visual perception).

He has published over 50 journal papers, over 200 refereed conference papers and more than 25 patents. He has won best paper prizes at several conferences, including the 5th IS&T Conference on Colour in Graphics, Imaging and Vision (2010) and the IEE Conference on Visual Information Engineering (1995). Many of Graham’s patents are implemented and used in commercial products, including photo-processing software, dedicated image-processing hardware (ASICs) and embedded camera software. Graham’s research is funded from a number of sources including government, industry and investment in spin-out companies. Industrial partners include Apple, Hewlett Packard, Sony, Xerox, Unilever and Buhler-Sortex. Significantly, Graham was the first academic at UEA (in its 50-year history) either to raise venture-capital investment for a spin-out company (Imsense Ltd, which developed technology to make pictures look better) or to make money for the university when this company was subsequently sold to a blue-chip industry major in 2010.

In 2002, Graham was awarded the Philip Leverhulme prize for science and in 2008 a Royal Society-Wolfson Merit award. In 2009 the Royal Photographic Society presented Graham with the Davies medal in recognition of his contributions to the photographic industry, and the RPS made him a fellow in 2012. In recognition of distinguished service to the Society for Imaging Science and Technology, Graham was elected a fellow of that society in 2010. In January 2013 he was also elected to a fellowship of the Institution of Engineering and Technology.

ABSTRACT

Most cameras are designed for photography, i.e. to take good-looking pictures. However, in computer vision we often want to use a camera as a measurement device. As in all measurement the units are important, yet every camera, for every scene, will produce a different picture. The putative solution is to calibrate the camera and, in effect, solve for the mapping which relates the picture of an arbitrary scene to its corresponding appearance under reference viewing conditions. In photographic terms, we wish to map ‘jpeg’ images back to ‘raw’. Unfortunately, solving for this mapping is not easy. This is partly due to the proprietary nature of camera processing pipelines, partly due to the large amount of data needed to perform the calibration, and partly because the image processing is often scene-content dependent (the radiometric calibration shifts with every picture taken).

In this talk we present a new method for camera calibration based on ranking (the ordering of image RGBs). We will show that arguments based on ranking alone allow the individual steps of the camera processing pipeline (colour correction, tone mapping and gamut mapping) to be isolated and characterized. Further, the method is inherently simple, and calibration is possible with a single image of a typical colour chart. The method provides state-of-the-art performance on a variety of datasets.
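Since the talk's pipeline is not spelled out here, the following is only a generic illustration of why rankings are enough: a rank-preserving (monotone non-decreasing) tone curve can be recovered by isotonic regression using the classic pool-adjacent-violators algorithm. The function name and data are illustrative assumptions, not the speaker's method.

```python
def pav(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y.
    A monotone fit preserves the ranking of the input samples, which is
    the only information a rank-based characterisation relies on."""
    stack = []                              # blocks of [mean, count]
    for v in y:
        stack.append([v, 1])
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(stack) > 1 and stack[-2][0] > stack[-1][0]:
            m2, c2 = stack.pop()
            m1, c1 = stack.pop()
            stack.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    out = []
    for m, c in stack:                      # expand blocks back to samples
        out.extend([m] * c)
    return out
```

Fitting noisy samples of a camera response this way yields the best non-decreasing curve through them, i.e. the curve that agrees with the observed ordering of RGBs.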

Applications of the work are discussed.

DATE2014-12-01
TIME16:10:00
PLACERoom: EL-0.01
Edward Llwyd Building


TITLE

Data Driven Geometric Processing

SPEAKERDr Yukun Lai, School of Computer Science & Informatics, Cardiff University
PROFILE

Dr Yukun Lai is a lecturer in the School of Computer Science and Informatics, Cardiff University, where he is a member of the Visual Computing group. He received his PhD degree from Tsinghua University, China in 2008. His research interests include computer graphics, geometry processing, image processing and computer vision. He has published over 50 papers in world-class journals and conferences. He is on the editorial board of The Visual Computer and was conference co-chair of the Eurographics Symposium on Geometry Processing 2014.

ABSTRACT

The talk will be of particular interest to those working on computer graphics, RGB-D data, shape morphing, learning, 3D scene reconstruction, model retrieval, and global optimization.

High-quality geometric models have long been expensive to obtain. With the maturity of low-cost acquisition devices as well as surface reconstruction and modelling techniques, geometric models have proliferated in recent years. The availability of geometric data provides essential knowledge that helps to fill in the gaps in potentially low-quality and incomplete input data. This talk will draw on our recent work to demonstrate, through computer graphics applications, how existing geometric data can improve the effectiveness of geometric processing algorithms. This includes automatic semantic modelling of indoor scenes from a sparse set of low-quality RGB-D images by exploiting a scene database, and generating more realistic morphing between shapes guided by a model database. For the former application, contextual relationships learnt from the database are used to constrain reconstruction, ensuring semantic compatibility between both object models and parts. Taking as input a series of coarsely aligned RGB-D images sparsely captured with a Kinect camera, the algorithm automatically produces a plausible 3D scene within seconds. For the latter application, models in the database are treated as data samples in the plausible deformation space, and the morphing problem is cast as a global optimisation problem of finding a minimal-distance path within the local shape spaces connecting these models. By exploiting the knowledge of plausible models, the algorithm produces realistic morphing for challenging cases.

DATE2014-11-17
TIME16:10:00
PLACERoom: EL-0.01
Edward Llwyd Building


TITLE

Medical Image Segmentation using Combinatorial Optimisation

SPEAKERDr Xianghua Xie, Visual Computing Group, Department of Computer Science, Swansea University
PROFILEDr. Xianghua Xie is an Associate Professor in the Visual Computing Group at the Department of Computer Science, Swansea University. He held an RCUK Academic Fellowship between September 2009 and March 2012, and he was a Senior Lecturer between October 2012 and March 2013. Prior to his position at Swansea, he was a Research Associate in the Computer Vision Group, Department of Computer Science, University of Bristol, where he obtained his PhD in Computer Science and MSc in Advanced Computing (with commendation) in 2006 and 2002, respectively.
ABSTRACTCardiovascular disease is one of the leading causes of mortality in the western world. Many imaging modalities have been used to diagnose cardiovascular diseases; however, each has different forms of noise and artefacts that make medical image analysis both important and challenging. This work is concerned with developing fully automatic segmentation methods for cross-sectional coronary arterial imaging, in particular intra-vascular ultrasound and optical coherence tomography, by incorporating prior and tracking information without any user intervention, to effectively overcome various image artefacts and occlusions. Combinatorial optimisation methods are proposed to solve the segmentation problem in polynomial time. A node-weighted directed graph is constructed so that vessel-border delineation is considered as computing a minimum closed set. A set of complementary edge and texture features is extracted, and single- and double-interface segmentation methods are introduced. A novel optimisation of the boundary energy function is proposed based on a supervised classification method. A shape prior model is incorporated into the segmentation framework, based on global and local information, through the energy function design and graph construction. A combination of cross-sectional segmentation and longitudinal tracking is proposed using the Kalman filter and the hidden Markov model. The border is parameterised using radial basis functions. The Kalman filter is used to adapt the inter-frame constraints between every two consecutive frames to obtain coherent temporal segmentation. An HMM-based border tracking method is also proposed in which the emission probability is derived from both the classification-based cost function and the shape prior model. The optimal sequence of hidden states is computed using the Viterbi algorithm.
Both qualitative and quantitative results on thousands of images show superior performance of the proposed methods compared to a number of state-of-the-art segmentation methods.
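The HMM-based tracking step above decodes the optimal hidden-state sequence with the Viterbi algorithm. Below is a generic log-space implementation of that algorithm, as a sketch only: the dictionary-based interface and any example models used with it are illustrative, not the authors' vessel-border formulation.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state sequence for an HMM, computed in log-space
    for numerical stability. `trans_p[s][t]` is the transition probability
    s -> t and `emit_p[s][o]` the probability of emitting observation o."""
    # initialise with the start distribution and first observation
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # best predecessor for state s at time t
            prev = max(states,
                       key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = prev
    # trace the best path backwards from the most likely final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

The dynamic program runs in O(T·|S|²) time, which is what makes per-frame border decoding over long image sequences tractable.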
DATE2014-11-03
TIME16:10:00
PLACEHugh Owen - Lecture Theatre D5


TITLE

Fun with Types

SPEAKERConor McBride, Computer and Information Science, University of Strathclyde
ABSTRACTType systems and their role in programming languages remain a matter of heated debate in programming communities. With the help of a few examples, I'll explore how the usage of types has evolved and speculate about future directions. As mainstream languages adopt functional features and functional languages try to connect with the real world, how can types help us understand what is going on?
DATE2014-06-19
TIME15:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Morphogenetic Self-Organisation of Swarm Robots for Adaptive Pattern Formation

SPEAKERProf. Yaochu Jin, University of Surrey
PROFILE

Yaochu Jin received the B.Sc., M.Sc., and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 1988, 1991, and 1996 respectively, and the Dr. Ing. degree from Ruhr-University Bochum, Bochum, Germany, in 2001.

He is a Professor of Computational Intelligence, Department of Computing, University of Surrey, Guildford, U.K., where he heads the Nature Inspired Computing and Engineering Group. His science-driven research interests lie in interdisciplinary areas that bridge the gap between computational intelligence, computational neuroscience, and computational systems biology. He is also particularly interested in nature-inspired, real-world driven problem-solving.

Dr Jin has (co)edited five books and three conference proceedings, authored a monograph, and (co)authored over 150 peer-reviewed journal and conference papers. He has been granted eight US, EU and Japan patents. His current research is funded by EC FP7, UK EPSRC and industries, including Santander, Bosch UK, HR Wallingford and Honda. He has delivered 16 invited keynote speeches at international conferences.

He is an Associate Editor of IEEE TRANSACTIONS ON CYBERNETICS, IEEE TRANSACTIONS ON NANOBIOSCIENCE, and IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, Evolutionary Computation, BioSystems, the International Journal of Fuzzy Systems and Soft Computing.

Dr Jin is an IEEE Distinguished Lecturer and Vice President for Technical Activities of the IEEE Computational Intelligence Society. He was the recipient of the Best Paper Award of the 2010 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology. He is a Fellow of BCS and Senior Member of IEEE.

ABSTRACT

Self-organization is one of the most important features observed in social, economic, ecological and biological systems. Distributed self-organizing systems are able to generate emergent global behaviors through local interactions among individuals, without centralized control. Such systems are also supposed to be robust, self-repairing and highly adaptive. However, the design of self-organizing systems is very challenging, particularly when the emergent global behaviors are required to be predictable.

Morphogenesis is the biological process in which a fertilized cell proliferates, producing a large number of cells that interact with each other to generate the body plan of an organism. Biological morphogenesis, governed by gene regulatory networks through cellular and molecular interactions, can be seen as a self-organizing process. This talk presents a methodology that uses genetic and cellular mechanisms inspired by biological morphogenesis to self-organize swarm robots for adaptive pattern formation in changing environments. We show that the morphogenetic approach is able to self-organize swarm robots without centralized control while still generating controlled global behaviours.

DATE2014-05-19
TIME16:10:00
PLACEPhysical Sciences - Theatre B


TITLE

Intelligent Wheelchair for Ambient Assisted Living

SPEAKERProfessor Dongbing Gu, University of Essex
PROFILE

Dongbing Gu is a professor in the School of Computer Science and Electronic Engineering, University of Essex, UK. His current research interests include distributed control algorithms, distributed information fusion, cooperative control, model predictive control, and machine learning. He has published more than 140 papers in international conferences and journals.

His research has been supported by Royal Society, EPSRC, EU FP7, British Council, and industries.

He is a board member of the International Journal of Modelling, Identification and Control and of Cognitive Computation.

He has served on the organizing and programme committees of many international conferences.

Prof. Gu is a Senior Member of the IEEE, a member of the technical committee on Safety, Security and Rescue Robotics, and a member of the Robotics Task Force of the Intelligent Systems Applications Technical Committee (ISATC) of the IEEE Computational Intelligence Society (IEEE/CIS).

ABSTRACT

We will give an overview of the development of an intelligent wheelchair with potential applications in ambient assisted living within the healthcare sector. The intelligent wheelchair provides mobility for elderly or disabled people, improving their independence and quality of life.

A range of techniques has been applied to the intelligent wheelchair, spanning the robotic platform, autonomous navigation and human-machine interaction. The talk will start with a brief introduction to the hardware and software platforms of the wheelchair. Our recent results in autonomous navigation, including SLAM, obstacle avoidance and semantic navigation, will then be presented. The human-machine interaction adopts a multi-mode architecture for communication between the user and the wheelchair, with sensory data fusion and machine learning techniques as its main focus. Some video clips of tasks tested in our lab will be shown.

DATE2014-05-12
TIME16:10:00
PLACEPhysical Sciences - Theatre B


TITLE

Fracture of Sea Ice in the Arctic and Antarctic

SPEAKERProf. John P. Dempsey, Clarkson University, Potsdam, New York, USA
ABSTRACT

Over the last half-century, research topics in sea ice mechanics and sea ice dynamics have included the large-scale response of sea ice to its environment, coupled atmosphere-ocean-ice climate models, as well as the passage of ships and the design of offshore structures in ice-covered seas.

The coupled influence of spatial and temporal scales underlies all sea ice mechanics and sea ice dynamics explanations, based on the recognition of subsets of processes and their interaction with adjacent scales.

Recognition of the influence of scale effects on the fracture of ice has not evolved smoothly or rapidly.

Associated investigations in the field and sub-size testing in the lab have offered unique challenges, both technical and human. This presentation will focus on these challenges, and summarize insights gained to date.

DATE2014-04-17
TIME15:10:00
PLACEPhysical Sciences - Theatre B


TITLE

How Crossover Speeds Up Building-Block Assembly in Genetic Algorithms

SPEAKERDirk Sudholt
PROFILEDr Dirk Sudholt is a Lecturer in the Department of Computer Science at the University of Sheffield. Before coming to Sheffield, he obtained his Diplom and his Ph.D. from the Technische Universität Dortmund under the supervision of Prof. Ingo Wegener. He has held postdoc positions at the International Computer Science Institute (ICSI) in Berkeley, California, in the group of Prof. Richard M. Karp, and at the University of Birmingham, working with Prof. Xin Yao on the SEBASE project. He is interested in randomized algorithms, algorithmic analysis and combinatorial optimization. His main expertise is the analysis of randomized search heuristics such as evolutionary algorithms, hybridizations with local search, and ant colony optimization.
ABSTRACTEvolutionary algorithms use search operators like mutation, crossover and selection to "evolve" good solutions for optimisation problems. In the past decades there has been a long and controversial debate about when and why the crossover operator is useful. The building-block hypothesis assumes that crossover is particularly helpful if it can recombine good "building blocks", i.e. short parts of the genome that lead to high fitness. However, all attempts at proving this rigorously have been inconclusive, and as of today there is no rigorous and intuitive explanation for the usefulness of crossover. In this talk we provide such an explanation. For functions where building blocks need to be assembled, we prove rigorously that many evolutionary algorithms with crossover are twice as fast as the fastest evolutionary algorithm using only mutation. The reason is that crossover effectively turns fitness-neutral mutations into improvements by combining the right building blocks at a later stage. This also leads to surprising conclusions about the optimal mutation rate.
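The building-block intuition can be made concrete with the two standard crossover operators. This is only a schematic illustration under assumed toy parents, not the talk's rigorous analysis: two parents that each carry half of the building blocks can be recombined into the optimum in a single step, whereas mutation alone must discover each missing block independently.

```python
import random

def one_point_crossover(p1, p2, point):
    """Classic one-point crossover: prefix of p1 joined to suffix of p2."""
    return p1[:point] + p2[point:]

def uniform_crossover(p1, p2, rng):
    """Each gene is taken from either parent with equal probability, so
    fitness-neutral differences between the parents can be recombined
    into an improvement in one step."""
    return [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]

# two equally fit parents, each carrying half of the building blocks
p1 = [1, 1, 1, 1, 0, 0, 0, 0]
p2 = [0, 0, 0, 0, 1, 1, 1, 1]
```

Here `one_point_crossover(p1, p2, 4)` assembles the all-ones optimum immediately, while a mutation-only hill climber starting from either parent would still have to flip the four missing bits on its own.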
DATE2014-03-31
TIME16:10:00
PLACEPhysical Sciences - Theatre B


TITLE

Computing with Cells

SPEAKERDr. Dominique Chu
PROFILE

Born in Austria, I studied theoretical physics at the University of Vienna before taking my doctorate in physics, on the topic of complexity, in Norway. I was later a postdoc at the School of Computer Science at the University of Birmingham, before becoming a lecturer at the University of Kent in 2005.

My main interests are biocomplexity and molecular computing. In addition to my scientific work I have published a textbook, "Introduction to Modelling for Biosciences" (jointly with David Barnes), and most recently a popular science book, "The Science Myth: God, society, the self and what we will never know".

ABSTRACT

Computational biology studies biological systems using computational methods, and is by now a very well-established field. Much less common is the study of cells and organisms as computational systems, despite the fact that biological systems are very powerful information processors whose properties we are only now beginning to understand.

I will start this talk by giving some examples of earlier work in the field, in which theoretical physicists tried to understand the thermodynamic limits of "Brownian" computers. I will then talk about my own work on the computational speed of biological information processors, and conclude with some glimpses of ongoing research.

DATE2014-03-03
TIME16:10:00
PLACEPhysical Sciences - Theatre B


TITLE

When Moore met Jevons

SPEAKERProfessor Colin Pattinson
PROFILE

Colin Pattinson is Professor and Head of the School of Computing, Creative Technologies and Engineering at Leeds Metropolitan University. His PhD, from the University of Leeds, measured the performance of computer communications protocols, and he has continued that research with work on the behaviour of network management protocols and platforms. Performance measurement and management form a key element in the quest for sustainability (particularly through the search for efficiency), giving rise to his current research.

He has been a committee member of the BCS Green IT Specialist Group from its foundation in 2009, and is also a board member of Leeds Met’s “Leeds Sustainability Institute”. He is currently involved in projects to develop Green IT research capability in Russia and Ukraine, and in a pan-EU MSc programme. In addition, he developed one of the first MSc awards in the subject.

ABSTRACT

The increased capability and performance of electronic devices, accurately predicted by Moore’s law since 1965, could conceivably have led to the same work being undertaken by fewer, more powerful devices. Instead, we have seen an example of Jevons’ paradox, in which greater efficiency increases the demand for resources, with new technologies, and new applications of those technologies, being developed to meet an ever-growing range of uses.

The resultant proliferation of “IT” in its widest sense means that the IT industry is now seen as a significant contributor to the environmental changes generally referred to as “climate change”. This presentation will address the implications of this, and also consider some of the ways in which the impact of IT resources can be managed or reduced.

However, IT can also allow individuals and organisations to behave in ways which reduce their energy consumption, and the benefits of “greening by IT” are apparent in a number of situations.

In this talk, I will introduce some of the research projects undertaken at Leeds Met addressing both greening of IT and greening by IT.

DATE2014-02-17
TIME16:10:00
PLACEPhysical Sciences - Theatre B


TITLE

Some cybersecurity issues of Smart Buildings, Smart Metering, not-so-Smart Cars and the Smart Grid

SPEAKERProfessor Martyn Thomas CBE FREng
PROFILEMartyn Thomas CBE FREng is Vice President of the Royal Academy of Engineering and Chair of the IT Policy Panel of the IET. He has been a visiting Professor at the University of Wales, Aberystwyth and at Oxford and Bristol Universities and a director of the Serious Organised Crime Agency. In the distant past he was a partner in Deloitte. Even longer ago, he founded a software engineering company called Praxis. He wishes that software developers would behave as if they understood the limitations of dynamic testing.
ABSTRACT

The march of automation into new application areas shows no signs of slowing. Every new application creates new vulnerabilities and the desire for greater automation is rarely accompanied by an equal desire to research, learn and apply the methods that have made existing systems adequately safe, secure and reliable (or to learn the lessons from those that are not).

I shall illustrate this general problem with some current examples from Smart Buildings, Smart Meters, and cars with a mind of their own.

DATE2014-01-16
TIME16:10:00
PLACEPhysical Sciences - Theatre B


TITLE

The Security of Machine Learning and Fuzzy Systems

SPEAKERProfessor Philippe De Wilde
PROFILE

Professor Philippe De Wilde is Head of the School of Mathematical and Computer Sciences, Heriot-Watt University, with campuses in Edinburgh, Dubai and Malaysia. Professor De Wilde obtained the PhD degree in mathematical physics and the MSc degree in computer science in 1985. He was Lecturer and Senior Lecturer in the Department of Electrical Engineering, Imperial College London, between 1989 and 2005. He is a Professor in the Intelligent Systems Lab of the Department of Computer Science, Heriot-Watt University.

Associate Editor, IEEE Transactions on Cybernetics. Associate Editor, SpringerPlus. Laureate, Royal Academy of Sciences, Letters and Fine Arts of Belgium, 1988. Research Fellow, British Telecom, 1994. Vloeberghs Chair, Free University Brussels, 2010. He has published 49 journal papers and 52 conference papers and book chapters. He has published four books, including "Neural Network Models", Springer 1997, and "Convergence and Knowledge-processing in Multi-agent Systems", Springer 2009.

He works in computational intelligence and cybernetics, using neural networks, fuzzy logic, evolution, and game theory. Research interests: security of machine learning, obfuscation in cyber security, decision making under uncertainty, the Bayesian brain, games on networks, business games, algorithmic trading, networked populations, phylogenetic trees. Professor De Wilde is a Fellow of the British Computer Society, a Fellow of the Institute of Mathematics and its Applications, a Senior Member of IEEE, a member of the IEEE Computational Intelligence Society and of the IEEE Systems, Man and Cybernetics Society.

ABSTRACTMachine learning and data mining are crucial to Google, Facebook, and many other IT applications in social networking and security. Machine learning itself, however, is subject to attacks. It is possible to introduce erroneous classifications. In this talk, I will introduce a classification of attacks on machine learning proposed by Barreno et al. in 2010. The talk will then cover defences against attacks, both non-fuzzy and fuzzy defences.
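The Barreno et al. taxonomy the abstract refers to distinguishes, among other axes, causative (training-time) attacks from exploratory (test-time) attacks. As a hedged illustration of the former — a toy example of ours, not material from the talk — the sketch below shows how relabelling a few training points ("poisoning") can change what a simple nearest-centroid classifier predicts for a borderline input. All names and data here are invented for illustration.

```python
def centroid(points):
    """Mean of a list of 1-D feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest_centroid_predict(x, centroids):
    """Return the label of the closest centroid (squared Euclidean)."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: sqdist(x, centroids[label]))

def train(data):
    """data: list of (features, label) pairs -> dict label -> centroid."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

# Clean training set: class A clusters near 0, class B near 10.
clean = [([0.0], "A"), ([1.0], "A"), ([2.0], "A"),
         ([9.0], "B"), ([10.0], "B"), ([11.0], "B")]

# Poisoned set: the attacker relabels two extreme B points as A,
# dragging A's centroid toward B's region.
poisoned = [([0.0], "A"), ([1.0], "A"), ([2.0], "A"),
            ([9.0], "A"), ([10.0], "A"), ([11.0], "B")]

query = [6.0]  # a borderline input
print(nearest_centroid_predict(query, train(clean)))     # -> B
print(nearest_centroid_predict(query, train(poisoned)))  # -> A
```

Only two flipped labels suffice here because the centroid is a non-robust statistic; defences of the kind the talk surveys aim to limit exactly this kind of influence.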
DATE2013-12-10
TIME17:10:00
PLACEPhysical Sciences - Theatre B


TITLE

From Page to Screen - Digitising a Million Pages of Historic Newspapers

SPEAKERIlltud Daniel, Head of ICT, National Library of Wales
PROFILEIlltud Daniel graduated with a BSc in Computer Science from University of Wales College of Cardiff in 1994 and moved to Oxford to become Computing Officer for Jesus and Magdalen Colleges. He returned to Wales in 1999 to join the Computer Section at the National Library of Wales and now leads the ICT Section of around 25 technicians, systems administrators and developers delivering internal and public services for the Library.
ABSTRACTThe National Library has had a vision of digitising and providing free access to its rich and varied collections. One of the latest and largest of these is the historic newspaper collections. I will describe the technologies that we used and the challenges that we faced in bringing the content from page to screen.
DATE2013-12-02
TIME16:10:00
PLACEPhysical Sciences - Theatre B


TITLE

Human Robot Long-Term Social Interaction: Robots forget too!

SPEAKERDr. John Murray
PROFILEDr. John Murray is a Senior Lecturer at the University of Lincoln. He joined the University of Lincoln in 2009 from the University of Hertfordshire where he worked as Research Fellow on the EU Funded FEELIX-Growing project, working with Robotics and Emotional interactions. Dr. Murray’s research interests include all things Robotics, developing robotic systems for numerous applications including Aerial Surveillance, Human Robot Interaction, Animal Behaviour studies, Ecology, etc.
ABSTRACTThis talk will address the notion of an 'imperfect' robot developed for long-term social interaction and companionship in two separate scenarios: i) aiding communication for children with autistic tendencies, and ii) long-term companion robots for the elderly and disabled. It has long been held that robots and robotic systems should be developed to be as 'perfect' as possible, working flawlessly (not that they ever do, of course), remembering everything they have experienced, and so on, especially when interacting with humans. The mechanisms by which researchers develop these interactive robotic systems include artificial neural networks and inspiration from psychology, infant studies and related fields. However, these are all based on human cognition and development, and take their inspiration from biological systems, which are clearly not perfect. This begs the question: if our template isn't 'perfect', how can our modelled system be? In this talk we present two aspects. The first is the emotive model of interaction developed as part of the FEELIX-GROWING FP7 project, identifying the redundancy of particular aspects of facial expression in communication. Second, we present work in progress on a forgetful robot, designed from the outset to be imperfect, and we identify what impact this has on long-term companion and socially interactive robotic systems.
DATE2013-11-18
TIME16:10:00
PLACEPhysical Sciences - Theatre B


TITLE

Why Silicon Valley - Why not here?

SPEAKERJohn Gilbey
PROFILEJohn Gilbey is a science - and science fiction - writer whose work has appeared in Nature, Nature Physics, New Scientist, The Guardian, The International Herald-Tribune and other journals - including slightly unusual ones such as the Journal of Unlikely Science and the Fairbanks Daily News-Miner. A Fellow of the British Computer Society, he teaches in the Department of Computer Science at Aberystwyth University and works for Software Alliance Wales.
ABSTRACT

Silicon Valley, an area south of San Francisco, California, USA, holds almost mythical status among the computing community worldwide.

Based on his visits to the area - and his interviews with folk in the IT, Higher Education and Research industries of the Valley - John Gilbey discusses (with help from some never-before-seen images) the factors that resulted in the rise of Silicon Valley, how it works day to day, how it set out to address the global recession and how it is faring against global competition. What aspects of the Silicon Valley vision and success could be replicated elsewhere - and how?

DATE2013-11-04
TIME16:10:00
PLACEPhysical Sciences - Theatre B


TITLE

Artificial Immune Systems as an Alternative Paradigm in Bio-Inspired Optimisation

SPEAKERDr. Christine Zarges
PROFILEChristine Zarges has been a Birmingham Fellow and Lecturer in the School of Computer Science at the University of Birmingham, UK, since 2012. Before that she spent a year as a visiting postdoc at the University of Warwick, UK, supported by a scholarship from the German Academic Exchange Service (DAAD). She studied Computer Science at TU Dortmund, Germany, where she obtained her Diploma (2007) and PhD (2011, on the theoretical foundations of artificial immune systems); her dissertation received the TU Dortmund dissertation award. In 2010 she received a Google Anita Borg Memorial Scholarship. Her research focuses on the theoretical analysis of randomised search heuristics, in particular artificial immune systems and evolutionary algorithms. She is also interested in computational and theoretical aspects of immunology. Two of her papers on artificial immune systems received best paper awards at leading conferences (PPSN 2008 and ICARIS 2011). She is a member of the editorial board of Evolutionary Computation (MIT Press) and was the instructor of a tutorial on “Artificial Immune Systems for Optimisation” at GECCO 2012 and 2013. She is co-chairing the new Artificial Immune Systems track at the Genetic and Evolutionary Computation Conference (GECCO 2014).
ABSTRACTArtificial immune systems (AIS) are a relatively new and emerging interdisciplinary area of research comprising two main branches: on the one hand, immune modelling, which aims at understanding the natural immune system by means of mathematics and computer science; on the other, problem solving by immune-inspired methods, which capture properties of the natural immune system such as self-organisation, learning, classification and adaptation capabilities, diversity, robustness and scalability. Such methods have achieved numerous promising results in different areas of application, e.g. learning, classification, anomaly detection and optimisation, and constitute an interesting alternative to other bio-inspired methods such as evolutionary algorithms. After a short overview of different approaches in the field of AIS, this talk will focus on recent insights into the working principles of AIS in the context of discrete optimisation. We will consider common operators used in existing AIS, such as hypermutation and ageing, and discuss results obtained through rigorous runtime analysis. Rather than presenting full proofs, we will focus on the insights gained and the implications such results can have for the way AIS are applied.
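To make the hypermutation operator mentioned above concrete, here is a minimal, hypothetical sketch of inverse fitness-proportional hypermutation inside a (1+1)-style immune algorithm on the OneMax toy problem: the worse the current solution, the more bits get flipped. The exponential rate schedule, parameter names and problem choice are our own illustrative assumptions, not the speaker's analysis.

```python
import math
import random

def onemax(bits):
    """Toy fitness: the number of ones in the bitstring."""
    return sum(bits)

def hypermutate(bits, rho=2.0):
    """Flip each bit independently with probability exp(-rho * v),
    where v is the parent's normalised fitness: low-fitness cells are
    mutated aggressively, high-fitness cells only gently."""
    v = onemax(bits) / len(bits)
    p = math.exp(-rho * v)
    return [b ^ 1 if random.random() < p else b for b in bits]

def optimise(n=20, steps=2000, seed=1):
    """(1+1)-style immune algorithm: keep the mutated clone only if it
    is at least as fit as the parent, so fitness never decreases."""
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        y = hypermutate(x)
        if onemax(y) >= onemax(x):
            x = y
    return x

best = optimise()
print(onemax(best), "ones out of", len(best))
```

Runtime analyses of the kind the talk describes ask, rigorously, how the expected number of steps of such a loop scales with n under different mutation and ageing schemes.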
DATE2013-10-21
TIME16:10:00
PLACEPhysical Sciences - Theatre B


TITLE

Learning from Big Random Weight Networks and Their Training Algorithms

SPEAKERProfessor Xi-Zhao Wang
PROFILE

Xi-Zhao WANG is presently the Dean and Professor of the College of Mathematics and Computer Science, Hebei University, China. He received his Ph.D. degree in Computer Science from Harbin Institute of Technology, Harbin, China, in 1998. From September 1998 to September 2001, he served as a Research Fellow in the Department of Computing, Hong Kong Polytechnic University, Hong Kong. He became Full Professor and Dean of the College of Mathematics and Computer Science in Hebei University in October 2001. His main research interests include learning from examples with fuzzy representation, fuzzy measures and integrals, neuro-fuzzy systems and genetic algorithms, feature extraction, multi-classifier fusion, and applications of machine learning. He has 160+ publications including 6 books, 7 book chapters, and 100+ journal papers in IEEE Transactions on PAMI/SMC/FS, Fuzzy Sets and Systems, Pattern Recognition, etc. He has been the PI/Co-PI for more than 20 research projects supported partially by the National Natural Science Foundation of China and the Research Grant Committee of Hong Kong Government.

Prof. Wang is an IEEE Fellow. He was a member of the IEEE SMC Society Board of Governors in 2005, 2007-2009 and 2012-2014; Chair of the IEEE SMC Technical Committee on Computational Intelligence; an Associate Editor of IEEE Transactions on SMC, Part B; an Associate Editor of Pattern Recognition and Artificial Intelligence; an Associate Editor of Information Sciences; an executive member of the Chinese Association of Artificial Intelligence; and an executive member of the China Society for Industrial and Applied Mathematics. He is the Editor-in-Chief of the International Journal of Machine Learning and Cybernetics. Prof. Wang received the IEEE SMCS Outstanding Contribution Award in 2004 and the IEEE SMCS Best Associate Editor Award in 2006, as well as the 2008 IEEE Outstanding SMCS Chapter Award and the 2009 Most Active SMC Technical Committee Award. He was the General Co-Chair of the 2002-2012 International Conferences on Machine Learning and Cybernetics, co-sponsored by the IEEE SMCS.

Prof. Xi-Zhao Wang is a distinguished lecturer of IEEE SMC Society.

ABSTRACT

Learning from big data has become a hot research area, as it brings many new challenges and opportunities which can identify potential business opportunities or lead to the discovery of new scientific results. Developing effective and efficient learning algorithms is essential to understand and model such data. This talk presents a new approach to learning from big data, namely random weight networks, and the automatic selection of their architectures based on our proposed Locally Generalized Error Model (LGEM). An error upper bound comprising the training error and sensitivity is given for the networks, and a model of incremental learning is built for handling big data. Experimentally, the incremental model demonstrates its effectiveness and some additional potential.
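The core idea behind random weight networks — attractive for big data because it avoids iterative backpropagation — is that the hidden-layer weights are drawn at random and frozen, so training reduces to a linear least-squares problem for the output weights. The toy sketch below (our own illustration, not Prof. Wang's code; the tanh activation, sizes and teacher-generated data are assumptions) fits the output weights via the normal equations:

```python
import math
import random

def hidden_features(x, weights, biases):
    """Map one scalar input to the vector of frozen tanh hidden units."""
    return [math.tanh(w * x + b) for w, b in zip(weights, biases)]

def solve(a, b):
    """Solve the small linear system a.beta = b by Gauss-Jordan
    elimination with partial pivoting (fine at this toy scale)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [u - f * v for u, v in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_output_weights(xs, ys, weights, biases):
    """Least-squares output weights via the normal equations
    (H^T H) beta = H^T y, where H holds the hidden activations."""
    H = [hidden_features(x, weights, biases) for x in xs]
    k = len(weights)
    hth = [[sum(row[i] * row[j] for row in H) for j in range(k)]
           for i in range(k)]
    hty = [sum(row[i] * y for row, y in zip(H, ys)) for i in range(k)]
    return solve(hth, hty)

# Random, frozen hidden layer; only the output weights are trained.
random.seed(0)
k = 3
weights = [random.uniform(-2, 2) for _ in range(k)]
biases = [random.uniform(-1, 1) for _ in range(k)]

# "Teacher" data generated from the same random features, so an exact
# output-weight solution exists and least squares recovers it.
true_beta = [1.5, -0.7, 0.3]
xs = [i / 10 for i in range(-10, 11)]
ys = [sum(b * h for b, h in zip(true_beta, hidden_features(x, weights, biases)))
      for x in xs]

beta = fit_output_weights(xs, ys, weights, biases)
```

Because the expensive part is a single linear solve, adding a new batch of data only requires updating H^T H and H^T y, which is one way an incremental learning scheme of the kind described can be organised.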

DATE2013-10-16
TIME16:10:00
PLACEPhysical Sciences - Main Theatre


TITLE

Learning Effective Human Pose Estimation from Inaccurate Annotation

SPEAKERSam Johnson, Toshiba R&D
PROFILESam Johnson received the B.Sc. (Hons.) degree in computer science from the University of Leeds in 2008 and is due to receive his Ph.D. from the same university in 2013. His Ph.D. thesis was on the topic of unconstrained human pose estimation - the inference of 2-D human body configuration in images such as consumer photographs - and was supervised by Dr. Mark Everingham and Prof. David Hogg. The methods developed during his Ph.D. achieve state-of-the-art accuracy and continue to be used as a benchmark in the area. Since 2012 Sam has held a Research Engineer position at Toshiba Research Europe Ltd., Cambridge, where he continues to focus on the application of machine learning and computer vision techniques to high-level understanding of humans in images.
ABSTRACTThe task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from different clothing, anatomy, imaging conditions and the large number of possible poses for a human body to take. We show that building a single Pictorial Structure Model leads to broad constraints on pose and appearance. Our work has involved partitioning the pose space and using strong nonlinear classifiers such that the pose dependent and multi-modal nature of body part appearance can be captured. Within each partition we learn much more informative pose priors leading to state-of-the-art results on highly challenging images. I will also present extensions to this clustered pose estimation approach to handle much larger quantities of training data, and show how to utilize Amazon Mechanical Turk and a latent annotation update scheme to achieve high quality annotations at low cost.
DATE2013-04-29
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Bio2RDF Release 2: Improved coverage, interoperability and provenance of Linked Data for the Life Sciences

SPEAKERMichel Dumontier, Carleton University, Ottawa, Canada
PROFILEDr. Michel Dumontier is an Associate Professor of Bioinformatics in the Department of Biology, the Institute of Biochemistry and School of Computer Science at Carleton University in Ottawa, Canada. His research aims to develop semantics-powered computational methods to increase our understanding of how living systems respond to chemical agents. At the core of the research program is the development and use of Semantic Web technologies to formally represent and reason about data and services so as (1) to facilitate the publishing, sharing and discovery of scientific knowledge produced by individuals and small collectives, (2) to enable the formulation and evaluation of scientific hypotheses using our collective tools and knowledge and (3) to create and make available computational methods to investigate the structure, function and behaviour of living systems. Dr. Dumontier serves as a co-chair for the World Wide Web Consortium Semantic Web in Health Care and Life Sciences Interest Group (W3C HCLSIG) and is the Scientific Director for the open-source Bio2RDF linked data for life sciences project.
ABSTRACTBio2RDF is an open source project that uses Semantic Web technologies to build and provide the largest network of Linked Data for the Life Sciences. Here, I'll describe Bio2RDF release 2, which features 19 updated MIT-licensed open-source scripts, consistent URI naming via API access to a dataset registry, VOID- and PROV-powered dataset provenance, 10 distinct dataset statistics, public SPARQL endpoints, and compressed RDF files and full-text-indexed Virtuoso triple stores for download. Over a billion statements via nineteen updated datasets, including 5 new datasets and 3 aggregate datasets, are now being offered as part of Bio2RDF Release 2. We show how dataset metrics not only provide elementary information such as the number of triples but also reveal a more sophisticated network among types and relations that can be used to assist query formulation and to monitor dataset changes. We demonstrate how multiple open source tools can be used to visualize and explore Bio2RDF data and include the ability to execute federated queries with biomedical ontologies. Over the next year we hope to offer regular releases and make it possible for developers to make use of this increasingly valuable resource on the emerging Semantic Web.
DATE2013-04-26
TIME14:00:00
PLACEHugh Owen Room D5


TITLE

Building Qualitative Models of Spatio-Temporal Behaviour

SPEAKERProfessor Tony Cohn, University of Leeds
PROFILETony Cohn holds a Personal Chair at the University of Leeds, where he is Professor of Automated Reasoning and served a term as Head of the School of Computing, from August 1999 to July 2004. He is presently Director of the Institute for Artificial Intelligence and Biological Systems. He holds BSc and PhD degrees from the University of Essex, where he studied under Pat Hayes. He spent 10 years at the University of Warwick before moving to Leeds in 1990. He now leads a research group working on Knowledge Representation and Reasoning with a particular focus on qualitative spatial/spatio-temporal reasoning, the best known result being the well-cited Region Connection Calculus (RCC). His current research interests range from theoretical work on spatial calculi and spatial ontologies, to cognitive vision, modeling spatial information in the hippocampus, and integrating utility data recording the location of underground assets. He has received substantial funding from a variety of sources, including EPSRC, the DTI, DARPA, the European Union and various industrial sources. Work from the Cogvis project won the British Computer Society Machine Intelligence prize in 2004.
ABSTRACTIn this talk I will present ongoing work at Leeds on building models of activity from video and other sensors, using both supervised and unsupervised techniques. Activities may occur in parallel, while actors and objects may participate in multiple activities simultaneously. The representation exploits qualitative spatio-temporal relations to provide symbolic models at a relatively high level of abstraction. A novel method for robustly transforming noisy sensor data to qualitative relations will be presented. For supervised learning, I will show how the supervisory burden can be reduced by using what we term "deictic supervision," whilst in the unsupervised case I will present a method for learning the most likely interpretation of the training data. I will also show how objects can be "functionally categorised" according to their spatio-temporal behaviour and how the use of type information can help in the learning process, especially in the presence of noise. I will present results from several domains including a kitchen scenario and an aircraft apron.
DATE2013-04-22
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

From Immune Systems to Robots

SPEAKERJon Timmis
PROFILEJon Timmis is Professor of Natural Computation at the University of York and holds a joint appointment between the Department of Electronics and the Department of Computer Science. His primary research is in the area of computational immunology and bio-inspired fault tolerance in embedded systems, with a focus on swarm robotic systems. He gained his PhD in Computer Science from the University of Wales, Aberystwyth. He holds a Royal Society Wolfson Research Merit Award, a senior research fellowship, to investigate the development of self-healing swarm robotic systems.
ABSTRACTThere are many areas of bio-inspired computing, where inspiration is taken from a biological system to construct an engineered solution. This talk will focus on the modelling of the immune system using agent-based simulations and the trustworthiness, or otherwise, of such simulations, and on the use of the immune system as inspiration in a variety of settings, from robot-mounted chemical identification to self-healing swarm robotic systems. We will explore how the modelling and engineering work can complement each other, and pass comment on the thrills and pitfalls of interdisciplinary working.
DATE2013-04-08
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre A


TITLE

Integrating Heterogeneous Data using Gaussian Process Function Approximations: Applications in Bioinformatics and Clinical Neurology

SPEAKERMark Girolami, UCL
PROFILEMark Girolami is Professor of Statistics in the Department of Statistical Science at University College London (UCL). He also holds a professorial post in the Department of Computer Science at UCL and is Director of the Centre for Computational Statistics and Machine Learning. Prior to joining UCL Mark held a Chair in Computing and Inferential Science at the University of Glasgow. He is currently Editor-in-Chief of the journal Statistics and Computing, an Associate Editor for J. R. Statist. Soc. C, Journal of Computational and Graphical Statistics and Area Editor for Pattern Recognition Letters. He currently holds a Royal Society Wolfson Research Merit Award and an EPSRC Established Career Research Fellowship.
ABSTRACTThere are many challenging open problems for predictive classification methods that have the enticing opportunity of improving error rates by exploiting diverse sources of data which it is now feasible to collect. The rationale is that each different type of data source will bring additional and complementary information about the objects to be discriminated. This raises two questions: (1) is there a common methodological framework that can systematically integrate these data representations, and (2) can each source of data be appropriately weighted taking into account the task at hand? These issues are particularly important when there is a cost associated with gathering the different forms of data, notably in clinical applications. This talk will outline such a methodology based on ideas from machine learning. Furthermore, a number of applications, such as protein fold recognition, remote homology detection, and diagnostic tests in clinical neurology, will be discussed, assessing the strengths and weaknesses of the approach.
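One standard way to frame the two questions above is with kernels: each data source contributes its own similarity (Gram) matrix, and a composite kernel is a weighted sum whose weights express how informative each source is. The sketch below is our own toy illustration of that general idea (not the speaker's method); the data, weights and the simple kernel-mean classifier are invented for illustration.

```python
import math

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel between two scalar measurements."""
    return math.exp(-gamma * (a - b) ** 2)

def composite_k(test_point, i, train_views, source_weights):
    """Weighted sum of per-source kernels between a test point and
    training example i; one weight per data source."""
    return sum(w * rbf(tp, view[i])
               for w, tp, view in zip(source_weights, test_point, train_views))

def predict(test_point, train_views, labels, source_weights):
    """Assign the class whose training examples have the highest
    average composite-kernel similarity to the test point."""
    scores = {}
    for c in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == c]
        scores[c] = sum(composite_k(test_point, i, train_views, source_weights)
                        for i in idx) / len(idx)
    return max(scores, key=scores.get)

# Two "data sources" measured on four training subjects: source 1
# separates the classes well, source 2 is essentially noise.
train_views = [[0.0, 1.0, 9.0, 10.0],   # source 1 (informative)
               [5.0, 5.1, 4.9, 5.0]]    # source 2 (uninformative)
labels = ["A", "A", "B", "B"]
test_point = [9.5, 5.05]  # clearly class B according to source 1

print(predict(test_point, train_views, labels, [0.0, 1.0]))  # -> A (misled by noise)
print(predict(test_point, train_views, labels, [0.5, 0.5]))  # -> B
```

Learning the source weights from data, rather than fixing them by hand as here, is exactly where the cost-aware weighting question in the abstract becomes interesting.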
DATE2013-03-18
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Image Analysis Methods for Plant Root Phenotyping at Multiple Scales

SPEAKERTony Pridmore, University of Nottingham
PROFILETony Pridmore is Reader in Computer Science at the School of Computer Science, University of Nottingham. His research interests centre on image analysis and computer vision, particularly motion analysis and tracking and their application in bioimage and video analysis. He is a member of the Interdisciplinary Computing and Complex Systems Research Group (ICOS) and the Centre for Plant Integrative Biology (CPIB).
ABSTRACTInterest in automatic image analysis has increased significantly within the plant sciences in recent years. This is due to the emergence of the systems approach to biological research and an increasing awareness that quantitative measurement of the phenotype has fallen behind understanding of the genotype. Work within the Nottingham Centre for Plant Integrative Biology has focused on the development of methods and tools for the recovery of quantitative data from images of plant roots. A need to provide data at a variety of scales (molecular, cellular, tissue, whole organ) has led to the use of several imaging modalities. This talk will describe recent work on the recovery of tissue scale structure and growth from confocal laser scanning microscopy and the 3D root system architectures of plants grown in soil from X-ray micro-computed tomography.
DATE2013-03-04
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

"Joint Computer Science and IMAPS Seminar"

The Antikythera Mechanism and the Early History of Mechanical Computing

SPEAKERProf Mike Edmunds
PROFILEMike Edmunds is Emeritus Professor of Astrophysics at Cardiff University and former Head of the School of Physics and Astronomy. He was educated at Cambridge, but has lived and worked in Wales for over 35 years. His main research career involved the determination and interpretation of the abundances of the chemical elements in the Universe, and investigation of the origin of interstellar dust. Later work has partly focused on the history of astronomy, and on Science in Society activity. Mike is Chair of the Antikythera Mechanism Research Project, Chair of the Astronomical Heritage Committee of the Royal Astronomical Society, a former member of two UK Research Councils (the Particle Physics and Astronomy Research Council and the Science and Technology Facilities Council), and can occasionally be seen in his one-man play about Newton, "Sir Isaac Remembers...".
ABSTRACTDoing arithmetic has probably been necessary since civilisation began. We now know that the ancient Greeks were able to make mechanical devices capable of calculation. The Antikythera Mechanism is an extraordinary device containing over thirty gear wheels dating from the 1st century B.C., and is an order of magnitude more complicated than any surviving mechanism from the following millennium. It is clear from its structure and inscriptions that its purpose was astronomical, including eclipse prediction. In this illustrated talk, I will show the results from our international research team, which has used modern imaging methods to probe its functions and details. The Mechanism's design is very sophisticated. I will outline how its technology may have almost disappeared from sight for over a thousand years and then been extended to more general mechanical clocks, calculators and computers from around 1200 A.D. through to the 19th Century.
DATE2013-02-25
TIME16:10:00
PLACEHugh Owen Lecture Theatre A12


TITLE

Securing the Critical National Infrastructure (CNI) from Cyber Attacks

SPEAKERDr Kevin Jones, EADS
ABSTRACTSecuring the Critical National Infrastructure (CNI) from cyber attacks is the focus of significant global research against a background of increased attack vectors and growing interest from governments worldwide. This talk provides an introduction to the Supervisory Control And Data Acquisition (SCADA) systems that form the basis of CNI, and discusses the background and requirements for current research in the area of CNI cyber security. The aim is to foster discussion and opportunities through the use of real-world examples, an overview of the SCADA cyber security problem space, and current research directions, including ongoing activities within EADS Innovation Works UK.
DATE2013-02-06
TIME13:30:00
PLACEPhysical Sciences Main Lecture Theatre


TITLE

The Theory of Darwinian Neurodynamics

SPEAKERChrisantha Fernando
(Lecturer in Cognitive Science)
EECS, Queen Mary University of London
PROFILEDr. Chrisantha Fernando originally studied medicine at Oxford and practiced at the John Radcliffe Hospital but was always reading about robots and so decided to do a MSc in Evolutionary and Adaptive Systems at Sussex, and then a PhD in computer models of the origin of life. After a Marie Curie Fellowship at the Institute for Advanced Study in Budapest with Prof. Eors Szathmary he started working on the major transitions in evolution. In 2008 he co-invented the Theory of Darwinian Neurodynamics with Prof. Szathmary which proposes that there are neuronal replicators in the brain and that they are responsible for open-ended cognitive adaptation.

In March 2013 a FET OPEN FP-7 project called INSIGHT begins to prove (or disprove) the theory. Also he is currently carrying out a project funded by the Templeton Foundation called "Bayes and Darwin" which uncovers the deep links between Bayesian inference and Darwinian natural selection. His talk will be about his recent work trying to implement Darwinian neurodynamic algorithms in robots.

ABSTRACTWe propose that there are informational replicators in the brain, akin to DNA but quite different in implementation. Response characteristics, e.g. orientation selectivity in visual cortex, are known to be copied from neuron to neuron. Through STDP, small neuronal circuits can undertake causal inference on other circuits to reconstruct the topology of those circuits based on observations of their spontaneous activity. Patterns of synaptic connectivity are replicating units of evolution in the brain. How does this map to the cognitive architecture level? The space of predictions is unlimited; brains do sparse search in this model space. We've shown that Darwinian dynamics is efficient for sparse search compared to algorithms that lack information transfer between adaptive units, in realistic adaptive landscapes. Thus, we propose that Darwinian dynamics in the brain implements approximate Bayesian inference, is capable of search in the space of physical symbol systems, search for syntactic rules, and search over structured representations in human insight problems.
DATE2013-02-04
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

The first 10 years in industry, selling software, and the emerging Big Data market

SPEAKERAdam Fowler, MarkLogic
PROFILE

Adam Fowler is a Senior Pre-Sales Engineer with MarkLogic Corporation. In the ten years since graduating with a degree in Computer Science from Aberystwyth he has worked as a developer, dev team co-ordinator, and pre-sales engineer for a variety of small and large organisations, including universities such as Aberystwyth and Derby, and companies such as FileNet, IBM and edge IPK. His work has spanned Financial Services, Insurance and the Public Sector, working with large partner SIs and selling software across Europe, North America and South Africa.

ABSTRACT

Adam will start with a history of his first 10 years in industry, the highs and the lows, covering small companies and large multinationals. He will then cover the pre-sales role and how to sell software generally, relating this to the types of activity a pre-sales engineer needs to carry out, and, importantly, why a pre-sales engineer is different from a salesperson. These sections include a few funny stories, including what not to ask a hotel concierge for! After this he will cover the software hype cycle: how software emerges in a new market, becomes popular, and is then commoditised. He will finish by talking about Big Data, what it actually means, and the software that is currently trying to solve these issues, including Hadoop, NoSQL software, and how open source relates to enterprise software vendors like MarkLogic. He will also briefly describe an academic challenge that MarkLogic is launching which, if you enter a project, could help pay toward your studies and see you present your project to MarkLogic's customers. Hopefully this will give Computer Science students an idea of what trends will be waiting for them in industry, and open them up to pre-sales as a career option.

DATE2012-12-03
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Autonomous boat control: working with a naval architect

SPEAKERDr Paul Miller, US Naval Academy
PROFILEPaul is an Associate Professor of Naval Architecture at the United States Naval Academy. He received his doctorate at the University of California at Berkeley, where he also spent far too much time sailing. He has helped design over 70 vessels, only two of which have sunk, and has worked in autonomous vessels for the last six years.
ABSTRACTControlling an autonomous surface vessel is a challenge: not only is the ocean an ever-changing, moving surface, but naval architects often design vessels in ways that make them difficult to control. This seminar will present many of the challenges in autonomous vessel control and will give tips so that those designing the control system and those designing the hull can live in harmony.
DATE2012-11-26
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Man, mouse and meaning; Semantic approaches to the exploitation of multi-species phenotype data

SPEAKERDr. Paul Schofield Cambridge (Department) : Home Page :
ABSTRACT

The collection of huge volumes of complex and deep, formally annotated phenotype data from both systematic mutagenesis and hypothesis-driven studies using model organisms such as mouse and zebrafish is being increasingly complemented by the formalisation of clinical phenotype annotation using the recently developed Human Phenotype Ontology. The problem of comparing the similarity of phenotypes within and between species, especially for the effects of specific mutations, has been largely solved by a new approach to phenotype decomposition using species-agnostic ontologies, such as the Gene Ontology and the Phenotype and Trait Ontology, which permit the relationship between phenotypes and diseases to be quantified and used for computational analysis. The application of this new approach will be illustrated with reference to the analysis of the human Mendelian overgrowth disorders and to the determination of the contribution to pathogenicity of genes contained within regions of human copy number variation (CNV).

DATE2012-11-19
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Relating Theory and Practice in Laboratory Work: A Variation Theoretical Study

SPEAKERAnna Eckerdal Home Page :
PROFILEAnna Eckerdal is a Lecturer at the Department of Information Technology, Uppsala University, where she teaches programming courses. Anna holds a M.S. in Science Education and a Ph.D. in Computer Science with specialization in Computer Science Education (Uppsala University). Her research interests include how novice students learn to program, Threshold Concepts in Computer Science, and Self-directed learning related to Computer Science. Currently Anna has a 3 years research grant from the Swedish Research Council.
ABSTRACTComputer programming education has practice-oriented as well as theory-oriented learning goals. Here, lab work plays an important role in supporting students' learning. It is, however, widely reported that many students face great difficulties in learning the theory as well as the practice, despite great efforts over many decades to improve programming education. This paper investigates the important but problematic relation between learning of theory and learning of practice for novice computer programming students. Theory is here discussed in terms of concepts, while practice is discussed as common programming activities students learn in the lab. Based on two empirical studies, it is argued that there exists a mutual and complex dependency between learning of concepts and learning of practice. It is hard to learn one without the other, and either of them can become an obstacle that hinders further learning.
DATE2012-11-12
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

A generalized risk approach to path inference based on hidden Markov models Alexey Koloydenko, Royal Holloway, (Joint work with Jüri Lember, Tartu University, Estonia)

SPEAKERAlexey Koloydenko
ABSTRACT

Motivated by the unceasing interest in hidden Markov models (HMMs), we re-examine hidden path inference in these models using a risk-based framework. While the most common maximum a posteriori (MAP)/Viterbi path estimator and the minimum error/Posterior Decoder (PD) have long been around, other path estimators, or decoders, have been either only hinted at or applied more recently and in dedicated applications.

Over a decade ago, however, a family of algorithmically defined decoders aiming to hybridize the two standard ones was proposed by Brushe et al. This and other previously proposed approaches will be shown to have various serious problems, and we will mention some practical resolutions of those.

Furthermore, simple modifications of the classical criteria for hidden path recognition will be shown to lead to a new class of decoders. Dynamic programming algorithms to compute these decoders in the usual forward-backward manner will be presented.

A particularly interesting subclass of such estimators can also be viewed as hybrids of the MAP and PD estimators. Like previously proposed MAP-PD hybrids, the new class is parameterized by a small number of tunable parameters. Unlike their algorithmic predecessors, the new risk-based decoders are more clearly interpretable and, most importantly, work "out of the box" in practice, as demonstrated on some real bioinformatics tasks and data. Some further generalizations and applications will be discussed in conclusion.
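To make the two standard decoders concrete, here is a minimal sketch of the MAP/Viterbi and posterior (PD) decoders for a discrete HMM. This is a generic textbook illustration, not the speakers' risk-based framework; any model parameters used with it are invented for illustration.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """MAP path: maximise the joint probability of the whole state sequence."""
    T, K = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])       # log-delta at t = 0
    back = np.zeros((T, K), dtype=int)             # backpointers
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)           # score of each (prev, cur) pair
        back[t] = cand.argmax(axis=0)
        logd = cand.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                  # trace backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def posterior_decode(pi, A, B, obs):
    """Posterior decoder: maximise the per-position marginal P(state_t | obs)."""
    T, K = len(obs), len(pi)
    fwd = np.zeros((T, K)); bwd = np.ones((T, K))
    fwd[0] = pi * B[:, obs[0]]
    for t in range(1, T):                          # forward recursion
        fwd[t] = (fwd[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):                 # backward recursion
        bwd[t] = A @ (B[:, obs[t + 1]] * bwd[t + 1])
    return list((fwd * bwd).argmax(axis=1))
```

Viterbi optimises the whole path jointly, while the posterior decoder picks the most probable state independently at each position; the hybrid decoders discussed in the talk interpolate between these two criteria.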

DATE2012-10-29
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

The Dendritic Cell Algorithm: Review and Evolution

SPEAKERDr Julie Greensmith University of Nottingham :
PROFILEDr Julie Greensmith did her undergraduate degree in Pharmacology with lots of computing courses on the side, and then a masters in Bioinformatics at Leeds. She spent some time as an intern at a fancy company (HP Labs), before taking up a PhD position at the University of Nottingham. This was followed by postdoctoral research and now a lectureship. In her spare time she enjoys lion taming (despite a cat allergy) and brass banding, had an album peak at number 10634 on iTunes, and is proud to have recently obtained Sabatier sponsorship for her knife juggling act. Ok, so the bit about knife juggling isn't true, but the rest of it is.
ABSTRACTThe DCA is the newest of the mainstream artificial immune system algorithms. It is a data fusion and classification algorithm used primarily for two-class classification or anomaly detection problems. Its unique property is its combination of classification, filtering, signal processing and correlation functions. It has been applied to a number of real-world applications including computer network security, autonomous robotics, embedded systems and standard machine learning datasets. Its main advantage over other similar algorithms is its lightweight approach to data processing (linear worst-case run time) and its ability to process data in near real time. The DCA is inspired by the dendritic cells of the human immune system. Specifically, the DCA is based on an abstract model of the maturation process of natural DCs and on Matzinger's 'danger theory'. In the algorithm, a population of artificial DCs is created and each cell is presented with signal data. The algorithm performs correlation between signals and antigen to classify the antigen data into normal or anomalous classes. The major criticisms of the DCA to date include the fact that if a single DC is used then the system function equates computationally to a filtered linear classifier. Additionally, the mapping process between domain and signal/antigen requires a considerable amount of expert knowledge. Thirdly, numerous variants of the DCA now exist, all with slightly different setups, parameter settings and data-mapping processes. This also includes a recent hybrid fuzzy DCA, which further confuses the issue. As part of this talk I will examine examples of DCA applications to date, in addition to presenting a clear definition of the most recent deterministic DCA. The evolution of the DCA is presented, from its initial conception as an abstract model through to an applied algorithm. As part of this research I also present conjecture as to the next steps for the development of this unconventional algorithm.
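The population-and-signals idea behind the DCA can be sketched in a few lines. This is a toy illustration only: the signal weights, migration threshold and scoring below are invented for the example and are much simpler than the published deterministic DCA.

```python
import random
from collections import defaultdict

def dca(stream, n_cells=10, threshold=5.0, seed=0):
    """Toy dendritic-cell-style classifier (invented weights, not the real DCA).

    stream: iterable of (antigen_id, danger, safe) tuples.
    Each artificial DC samples items, accumulating signal, until its
    co-stimulation crosses the migration threshold; it then 'presents' its
    sampled antigens in an anomalous or normal context. The returned score
    per antigen is the fraction of anomalous presentations (akin to MCAV).
    """
    rng = random.Random(seed)
    cells = [{"csm": 0.0, "k": 0.0, "antigens": []} for _ in range(n_cells)]
    votes = defaultdict(lambda: [0, 0])            # antigen -> [anomalous, total]
    for antigen, danger, safe in stream:
        cell = rng.choice(cells)                   # one cell samples this item
        cell["csm"] += danger + safe               # co-stimulation drives migration
        cell["k"] += danger - 2.0 * safe           # context: safe signals dominate
        cell["antigens"].append(antigen)
        if cell["csm"] >= threshold:               # cell migrates and presents
            anomalous = cell["k"] > 0
            for a in cell["antigens"]:
                votes[a][0] += int(anomalous)
                votes[a][1] += 1
            cell.update(csm=0.0, k=0.0, antigens=[])
    return {a: anom / tot for a, (anom, tot) in votes.items()}
```

Antigens repeatedly sampled alongside danger signals accumulate high anomaly scores, while those seen in safe contexts score low - the correlation-by-population idea the abstract describes.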
DATE2012-10-15
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Engineering Challenges in Regenerative Medicine

SPEAKERProfessor Zhanfeng Cui, PhD, DSc Home Page : Institute :
PROFILEProfessor Cui is the Donald Pollock Professor of Chemical Engineering at Oxford University. His research lies at the interface between Chemical Engineering, life science and membrane technology. He is the Founding Director of the Oxford Centre for Tissue Engineering and Bioprocessing, serves on the Research Councils as both committee and panel member (BBSRC, EPSRC, MRC, CCLRC) for grant reviews, and sits on the Editorial Board of several relevant journals (Journal of Membrane Science, Food and Bioproduct Processing, Patents in Biotechnology, Patents in Engineering, China Particuology, Science (China), Chinese Journal of Antibiotics, Chinese Journal of Biomechanics, etc.).
ABSTRACT

Regenerative medicine aims at developing new therapies and treatments for currently non-curable diseases and conditions, and is a fast-growing field in research, development and commercialisation. It mainly follows two inter-related approaches: tissue engineering and stem cell therapy. Regenerative medicine needs a multidisciplinary effort involving physical and life scientists, engineers and clinicians.

Engineering plays an important role in translating regenerative medicine from the laboratory to the hospital bedside. In this presentation, examples of the critical contributions of engineers will be discussed, including scale-up or scale-out, bioprocessing, control of stem cell differentiation, quality control, etc. A specific example is the prediction of stem cell differentiation, where novel information technologies and computing, such as data mining and classification, can make a significant impact. The outcome can potentially save a great deal of experimental effort, and hence cost in time and money.

DATE2012-10-01
TIME16:10:00
PLACETo be decided


TITLE

Intelligent Data Analysis: Issues and Opportunities

SPEAKERProfessor Xiaohui Liu Home Page : Brunel University SISCM :
PROFILEXiaohui Liu is Professor of Computing at Brunel University where he directs the Centre for Intelligent Data Analysis, conducting interdisciplinary research involving artificial intelligence, dynamic systems, human-computer interaction, and statistical pattern recognition.
ABSTRACTIntelligent Data Analysis is needed to address the interdisciplinary challenges concerned with the effective analysis of data. In this talk, I will look into some of the key issues as well as opportunities in modern data analysis, in particular, how to ensure that quality data are obtained for analysis, to meet challenges in modelling dynamics, to handle human factors with care, as well as to consider all these when analysing complex systems. Examples in biology, finance, medicine, and security will be drawn from work carried out at Brunel and elsewhere.
DATE2012-05-14
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

"Changing the World with Intelligent Algorithms"

SPEAKERDoug Aberdeen
PROFILEDoug Aberdeen wasn't sure what he really wanted to do until he hit his 5th (!) year of undergrad. Somehow he was accepted for graduate studies and all the other job possibilities seemed boring. But then, after a Ph.D and a few years as a post-doc in machine learning, he was itching to find out what the "real world" was like. The rest of the story is in the talk.
ABSTRACT

Algorithms have already changed the world and are continuing to do so. From the exploits of British cryptographers during WWII, through to the algorithms driving modern search engines, the common theme is the smart application of clever algorithms to help people. I'm going to try to illustrate by example how Google is carrying on this story and how today's students can continue it. To begin, I'll talk a bit about the founding of Google and PageRank. To finish, I'll talk about my personal experiences developing and deploying algorithms for Gmail's spam detection and the Priority Inbox. I'll also talk about how it's one thing to develop an algorithm that works for an individual, but something different to make it work for millions of users.

This talk is aimed at a broad audience from first years through to staff interested in machine learning.
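As a flavour of the algorithmic side of this story, the core of PageRank reduces to a simple power iteration over the link graph. This is the textbook formulation, not Google's production system, and the example graph is invented.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank on an adjacency dict {page: [outlinks]}."""
    pages = sorted(set(links) | {p for outs in links.values() for p in outs})
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}             # start from a uniform vector
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}   # teleportation term
        for p in pages:
            outs = links.get(p, [])
            if outs:                               # distribute rank along outlinks
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:                                  # dangling page: spread uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

Pages that attract many inbound links from highly ranked pages end up with the largest scores, and the total rank mass stays normalised to 1 across iterations.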

DATE2012-05-04
TIME14:00:00
PLACEC22 Hugh Owen


TITLE

An Evolutionary Simulation Approach to Extravagant Honesty

SPEAKERProfessor Seth Bullock Home Page :
PROFILEProf Seth Bullock is a leading UK complexity science researcher at the University of Southampton. He is a founding member of the Agents, Interaction and Complexity group within the School of Electronics and Computer Science and is Director of the University's Institute for Complex Systems Simulation (www.icss.soton.ac.uk). His research takes place at the intersection between complexity science, biological modelling, and artificial intelligence. Recent research activities include leading the EPSRC Resilient Futures project, which explores the resilience of future infrastructure to terrorist attack and extreme weather events, and the EPSRC Care Life Cycle project, which explores how the supply of and demand for health care and social care will be affected by demographic change. He served as Conference Chair for the 11th International Conference on Artificial Life on its first visit to Europe, has published in journals spanning health, economics, biology, computing, architecture, geosciences and physics, and was the only physical scientist invited to contribute to Richard Dawkins' OUP festschrift.
ABSTRACT

Given their "selfish" genes, it is remarkable that biological creatures pay any attention to the displays, advertisements, threats, warnings, etc., directed at them, and equally remarkable that so many of these signals are produced in the first place. Moreover, many of these biological signals appear to be needlessly extravagant in terms of the energy spent, time taken, and risks incurred in producing them.

At first sight, for instance, it is difficult to understand why peacocks persist in constructing and maintaining tails that are a significant and, to the disinterested observer, irrational drain on resources.
Might the same information not be conveyed through a stable signalling system employing much cheaper signals?

In this talk I will present an evolutionary simulation approach to answering such questions. Here, multiple artificial signalling systems are allowed to compete with one another over evolutionary time, and where more than one signalling system is viable, the models explain why the more extravagant signalling systems will tend to be favoured by evolution.

DATE2012-04-30
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Digital Histology - From Microscopes to Computer Analysis and Visualisation

SPEAKERDr Derek Magee School of Computing Leeds University : Personal Web :
PROFILEDerek's research is based on the practical application of model based machine vision in domains such as agriculture, traffic monitoring and medical image analysis. His particular interest is in statistical and logical modelling of the spatial and temporal characteristics of such domains.
ABSTRACTOnce upon a time in a hospital not far from you histopathologists used microscopes to examine pieces of tissue extracted from the human body to diagnose disease. In fact not much has changed (yet!). However, there is a better way that involves digitising stained tissue samples at very high resolution. In addition to facilitating useful things such as digital storage and transmission, this affords the opportunity for computer scientists to get involved and have some fun. We can apply all the image analysis and visualisation techniques that we've developed for digital radiology to this domain. Additionally, it introduces new complications as the images are huge, colour, 2D only, and are often used in conjunction with other imaging modalities (e.g. MRI or other Histopathology images with different chemical stains). This talk will discuss some of the ongoing work in Leeds on Image analysis, 3D histopathology and novel interfaces.
DATE2012-03-26
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Nudge to Shape Physical Activity Behaviour with Personal Mobile Devices

SPEAKERDr. Parisa Eslambolchilar Home Page : Swansea University - CompSci :
PROFILEDr. Parisa Eslambolchilar has been a lecturer in the FIT Lab at Swansea University since March 2007. Her research interests are in the area of dynamic, continuous interaction with small computing appliances, multimodal interaction, human-computer interaction (HCI) with medical devices, and persuasive technologies. She has run two successful international workshops on the subject of Persuasion, Nudge, Influence and Coercion using Ubiquitous Technologies in conjunction with the CHI and Mobile HCI conferences. She has been an International Programme Committee member for the UbiComp 2011 and Pervasive Health 2012 conferences, and is chairing the Mobile HCI 2012 workshops. She has given public talks at the SONY Computer Science Research Institute (Paris), the SHARP research lab (Oxford), and the Knowledge Media Institute (Milton Keynes). She is a co-investigator in ``Healthy Interactive Systems: Resilient, Usable and Appropriate Systems in Healthcare'', an EPSRC-funded platform grant (ref EP/G003971), and a Co-Investigator (leading researcher in Swansea) in the EPSRC-funded project EP/H006966/1, ``CHARM: Digital technology and interfaces: shaping consumer behaviour by informing conceptions of 'normal' practice''.
ABSTRACTThe aim of this talk is to provide a focal point for research and technology dedicated to persuasion and influence. Patterns of consumption such as drinking and smoking are shaped by the taken-for-granted practices of everyday life. However, these practices are not fixed and `immensely malleable'. Consequently, it is important to understand how the habits of everyday life change and evolve. Our decisions are inevitably influenced by how the choices are presented. Therefore, it is legitimate to deliberately `nudge' people's behaviour in order to improve their lives. Mobile devices can play a significant role in shaping normal practices in three distinct ways: (1) they facilitate the capture of information at the right time and place; (2) they provide non-invasive and cost effective methods for communicating personalised data that compare individual performance with relevant social group performance; and (3) social network sites running on the device facilitate communication of personalised data that relate to the participant's self-defined community. In this talk I will be particularly focusing on persuasive technologies available for shaping physical activity behaviour on mobile platforms including the bActive application developed through the CHARM project.
DATE2012-03-12
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

"From robots that dance to moles that need to be whacked: taking programming off the screen, out of the lab and back into the consciousness of our kids."

SPEAKERChris Martin Home Page : University of Dundee ( Computing ) :
PROFILE

Chris is a researcher in applied computing working in the School of Computing, University of Dundee, Scotland. He is involved in research, teaching and outreach in the School of Computing. As well as conventional programming on a variety of platforms, he is interested in how we make technology fit the people it's crafted for. Utilising tools such as focus groups, live theatre and ethnographic techniques, he is often not surprised to discover that the richness and complexity of the people we work and design for far outweigh the complexity of the technology we seek to construct.

The particular focus of his ongoing PhD research is computer programming and how programmers support themselves in solving problems - where a programmer may be a senior analyst in a large software development company, or a primary school child first discovering that sequence, decision and repetition can make a robot dance. Can grounding the abstract components of a computer program in a physical device improve success in programming...?

ABSTRACTIn this talk I will share my experiences and aspirations of recapturing the imagination of school kids and keeping them hungry for computing once we have them on our courses. Three topics will mix and resonate together. Outreach activities: robot dance, the bug catcher challenge and the dance of creative code (work in progress). Level one teaching: physical computing and data visualisation - what can be achieved with two semesters and two enabling technologies. Finally, the PhD (ongoing): building the evidence-based case for these ideas and the experimental methods I employ.
DATE2012-03-05
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Robot Traders & Flash Crashes: WTF?

SPEAKERProfessor Dave Cliff Home Page : University of Bristol - Faculty of Engineering :
PROFILEDave Cliff is a professor of computer science at the University of Bristol and has previously worked as an academic at the Universities of Sussex and Southampton in the UK, and at MIT in the USA. He has also worked as a research scientist for Hewlett-Packard Labs and as a Director/Trader for Deutsche Bank's Foreign-Exchange Complex Risk Desk. Since 2005 he has served as Director of a £15m national research and training initiative addressing issues in the science and engineering of Large-Scale Complex IT Systems (LSCITS: see www.lscits.org). He is author of approximately 100 academic publications, and inventor on 15 patents. In 1996 he invented one of the first adaptive autonomous algorithmic trading systems applicable to financial markets, which in 2001 was demonstrated by IBM to outperform human traders. He is currently serving as one of the group of eight experts leading the UK Government's "Foresight" investigation into the future of computer trading in the financial markets, a two-year project run by the Government Office for Science.
ABSTRACTIn the past decade, the global financial markets have become very heavily dependent on automated trading systems where computer systems perform trading jobs that were previously done by humans. Automated trading systems can now perform at truly superhuman levels, integrating vast amounts of data and reacting at split-second speeds that no human trader could ever match. The mix of human traders and automated systems, and the planetary interconnectedness of various major trading exchanges, mean that the global financial markets are now a single ultra-large-scale socio-technical system, built from risky technology. Various events in the past 18 months have served to highlight that the global financial system may now be less resilient, and more vulnerable to sudden severe failures, than it has ever been in the past. In this lecture I will talk about how we got to where we are, what the current problems are, what's likely to happen next, and what might be done to make things better.
DATE2012-02-27
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Taming Schrödinger's cat

SPEAKERDaniel Burgarth
PROFILEDaniel Burgarth has recently joined IMAPS as a lecturer. Previously, he held an EPSRC Fellowship in Theoretical Physics at Imperial College, London. His research interests include the dynamics of quantum many-body systems, control theory and quantum information.
ABSTRACTThe last decades have seen a paradigm shift in our view of quantum theory. While formerly the wave-like nature of quantum mechanics was mostly considered a blurry, noise-like phenomenon, it is now known that it can be a powerful resource for computation. Roughly speaking, nature is at its most fundamental level uncertain. When a quantum computer is challenged with a fundamentally uncertain input, it provides in some sense all possible answers simultaneously, which gives rise to its extraordinary power. To use this power we have to learn how to tame Schrödinger's cat. In this lecture I will give a basic, introductory overview of quantum computing and its current challenges.
DATE2012-02-20
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

"Meta-Morphogenesis of information processing in biological evolution, learning, development, and culture."

SPEAKERProfessor Aaron Sloman
PROFILEAfter a BSc in Mathematics and Physics at Cape Town in 1956, he went to Oxford, intending to continue mathematics, but was seduced by philosophy and obtained a DPhil in Philosophy of Mathematics (1962). He taught philosophy at Hull for two years, then from 1964 to 1991 at Sussex University, except for a year in Edinburgh, 1972-3. He encountered AI around 1969, and decided that philosophical progress required designing increasingly complex working fragments of minds of many kinds -- a very long, slow process. He published 'The Computer Revolution in Philosophy' in 1978.[1] He helped to develop AI/Cognitive Science teaching and research and the formation of COGS (Cognitive and Computing Sciences) at Sussex, and contributed to the development and management of Poplog, a toolkit supporting teaching and research in AI.[2] He moved to Birmingham in 1991, continuing interdisciplinary research in the philosophy of mind, mathematics, science and language; AI and tools for research and teaching in AI; and theories of development and evolution, including the development and evolution of architectures, forms of representation, control mechanisms, visual processing, and reasoning, especially the role of the environment in convergent evolution.[3] Elected Fellow of AAAI, AISB and ECAI. Hon DSc Sussex 2006. Currently retired, but doing research full time.

[1] http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
[2] http://www.cs.bham.ac.uk/research/projects/poplog/freepoplog.html
[3] http://www.cs.bham.ac.uk/~axs/my-doings.html

ABSTRACT

Much of Turing's work was about how large numbers of relatively simple processes could cumulatively produce qualitatively new large scale results e.g. Turing-machine operations producing results comparable to results of human mathematical reasoning, and micro-interactions in physico-chemical structures producing global transformations as a fertilized egg becomes an animal or plant. In the same spirit, this talk presents some aspects of a draft theory of "meta-morphogenesis": processes and mechanisms involved in interactions between changing environments, changing animal morphologies, changing information processing capabilities and changing mechanisms for producing all these changes.

"Informed control" is a core feature of all life, starting with control of various kinds of physical behaviour, then later also informed control of information processing in individuals, in groups of individuals in one or more species, and in larger, more abstract systems. By understanding the varied pressures leading to these changes and the many and varied results of such changes, we can gain new insights into issues addressed in a variety of disciplines, including computer science, AI/robotics, cognitive science, neuroscience, psychology, psychiatry, linguistics, philosophy and education. I'll try to show (if time permits) how some of what we have learnt about types of information-processing systems in the last half century or so can illuminate philosophically puzzling features of animal minds, including the existence of "qualia", minds with mathematical intelligence, and the roles of precursors of human language required for perception, motivation, planning, plan execution, learning, and the later development of languages for communication. This should also enhance our still incomplete understanding of the requirements for future machines rivalling biological intelligence. One of many implications is the short-sightedness of some current theories of "embodied" cognition.

DATE2012-02-13
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Computational Modelling of Tumour Growth

SPEAKERDr Matthew Hubbard School of Computing, University of Leeds : Home Page :
PROFILEMatthew is a Senior Lecturer in the School of Computing, University of Leeds, which he joined in September 2000. Prior to this he spent a number of years doing postdoctoral research, first in the Department of Mathematics at the University of Reading (where he also did his Ph.D.), and then in DAMTP at Cambridge University. He also has a couple of years of industrial experience, having worked for British Aerospace immediately after graduating from his first degree. His research areas are generally in the area of Scientific Computing and Computational Fluid Dynamics, but the main focus is on creating numerical methods which naturally retain the properties of the underlying partial differential equations.
ABSTRACTTumour growth is a complex and poorly understood process which is open to analysis by a range of mathematical and computational tools. These tools can be used to provide insight into the biological processes which might cause patterns of behaviour seen in vitro and in vivo, leading ultimately to a model which can be used to simulate tumour growth.
This talk will consist of two parts. First, I will introduce some of the fundamental issues relating to mathematical and computational modelling, placing particular emphasis on biomedical processes. The second part of the talk will then describe a specific computational model of vascular tumour growth, developed in collaboration with Prof Helen Byrne of the University of Oxford. Computational simulations will be shown which demonstrate this model's ability to reproduce classical tumour structures.
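As a hint of what numerically modelling such processes involves, here is a minimal explicit finite-difference scheme for the Fisher-KPP equation (diffusion plus logistic proliferation), a classical minimal model of tumour cell density. This is a generic illustration with invented parameters, not the vascular model developed with Prof Byrne.

```python
import numpy as np

def grow(n0, D=0.1, r=1.0, dx=0.1, dt=0.01, steps=100):
    """Explicit finite-difference integration of u_t = D u_xx + r u (1 - u)
    with zero-flux (Neumann) boundaries; stable when D*dt/dx**2 <= 0.5."""
    u = n0.astype(float).copy()
    for _ in range(steps):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # interior Laplacian
        lap[0] = (u[1] - u[0]) / dx**2                       # reflecting boundary
        lap[-1] = (u[-2] - u[-1]) / dx**2
        u = u + dt * (D * lap + r * u * (1 - u))             # forward-Euler step
    return u
```

Under the stated stability condition, the scheme preserves two properties of the continuous equation - non-negativity of the density and the logistic carrying capacity u <= 1 - which is exactly the kind of structure-preservation the first part of the talk concerns.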
DATE2012-01-30
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

"Making the most of the HEA"

SPEAKERDr. Mark Ratcliffe
ABSTRACTMark will talk about how departments can take full advantage of the HEA, in terms of available funding, workshops, conferences etc. Everyone at Aberystwyth could benefit financially in one way or another. He will also talk more broadly about work on employability, particularly in regard to how we can maximise student take-up of industrial placements.
DATE2012-01-23
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

"Diagnosing system failures in 2015 : What happens when systems with 13000 cpu's and 64 TB of memory go wrong."

SPEAKERDr Clive King
Senior Staff Engineer
PROFILEDr Clive King is a Senior Staff Engineer in Oracle Solaris Revenue Product Engineering. His main focus is on the diagnosis of complex stack performance, data integrity and availability issues on large enterprise class systems. He also likes to fix the underlying process problems so something similar does not happen again.

He has worked in Sun, now Oracle for 14 years. Previously he worked for Cray and at Aberystwyth University where he also gained a PhD in the area of Distributed Systems. He is B.C.S. Fellow, a member of the B.C.S. Accreditation Panel, an I.S.E.B. Chief Examiner and a PhD examiner.
ABSTRACT

In 1990, a typical Sun system had a single 10 MHz CPU and 4 MB of memory, and might have run in the region of 30-50 processes. Today Oracle ships systems with 512 CPUs and 4 TB of memory, and such systems have a few hundred thousand processes at most. The roadmap suggests that by 2015 this will rise to CPU counts around 13,000 and 150 TB of memory, serving workloads in excess of 1 million processes.

Like C.S.I., when a system fails a post-mortem is required. A crash dump is the body, an image of the system memory at the time of failure. This talk looks at technical, logistical and tools challenges of diagnosing system failure after the fact when the body is 4TB in size and the challenges of scaling post-mortem failure diagnosis to ever larger configurations.

DATE2011-11-28
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Visual Exploration of Time Series: the Multi-dimensional Data Challenge

SPEAKERDr Rita Borgo, Computer Science, Swansea University
ABSTRACT

Interactive exploration of time-series data faces challenges arising from the increase in both data size and the richness of the information carried.

A third, parallel issue, of particular relevance to visualization, comes from the inherent human limitations on processing large amounts of information, an aspect which seriously constrains the visual display of data.

These three factors currently dominate the design of new visualization solutions to support the exploration of time-series data in search of interesting features and trends.

In this talk we will present an overview of work developed at Swansea University to tackle these challenges.

Three major results in the fields of video visualization and remote-sensing data analysis will be presented, together with some future directions and open questions.

DATE2011-11-14
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

The Danger of Software Patents

SPEAKERDr Richard Stallman
President
Free Software Foundation Free Software Foundation : E-mail :
PROFILERichard Stallman launched the free software movement in 1983 and started the development of the GNU operating system (see www.gnu.org) in 1984. GNU is free software: everyone has the freedom to copy it and redistribute it, as well as to make changes either large or small. The GNU/Linux system, basically the GNU operating system with Linux added, is used on tens of millions of computers today. Stallman has received the ACM Grace Hopper Award, a MacArthur Foundation fellowship, the Electronic Frontier Foundation's Pioneer Award, and the Takeda Award for Social/Economic Betterment, as well as several honorary doctorates.
ABSTRACTRichard Stallman will explain how software patents obstruct software development. Software patents are patents that cover software ideas. They restrict the development of software, so that every design decision brings a risk of getting sued. Patents in other fields restrict factories, but software patents restrict every computer user. Economic research shows that they even retard progress.
DATE2011-10-31
TIME16:10:00
PLACEArts Centre Theatre


TITLE

Multiple Criteria Decision Making and Systems Design

SPEAKERProfessor Peter Fleming, University of Sheffield
PROFILEPeter Fleming is Professor of Industrial Systems and Control in the Department of Automatic Control and Systems Engineering and Director of the Rolls-Royce University Technology Centre for Control and Systems Engineering at the University of Sheffield, UK. His control and systems engineering research interests include control system design, system health monitoring, multi-criteria decision-making, optimisation and scheduling, and applications of e-Science. He has over 400 research publications, including six books, and his research interests have led to the development of close links with a variety of industries in sectors such as automotive, aerospace, energy, food processing, pharmaceuticals and manufacturing. He is a Fellow of the Royal Academy of Engineering, a Fellow of the International Federation of Automatic Control, a Fellow of the Institution of Engineering Technology, a Fellow of the Institute of Measurement and Control, and is Editor-in-Chief of International Journal of Systems Science.
ABSTRACTDesign problems arising in control and systems can often be conveniently formulated as multi-criteria decision-making problems. Inevitably, these problems often comprise a relatively large number of criteria. Many-objective optimisation poses difficulties for multiobjective optimisation algorithms which have been designed to solve problems with two or three objectives and alternative approaches for addressing many objectives will be described. Through close association with designers in industry, a range of machine learning tools and associated techniques have been devised to address the special requirements of many-criteria decision-making. These include visualisation and analysis tools to aid the identification of conflicting and non-conflicting criteria, interactive preference articulation techniques to assist in interrogating the search region of interest and methods for exploring design options for cases where constraints may be relaxed or tightened. Industrial design exercises will demonstrate these approaches.
DATE2011-10-03
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Towards a 'language' of facial expressions - from cognition to computation

SPEAKERDr Christian Wallraven
ABSTRACTThe face is capable of producing an astonishing variety of movements ranging from larger scale head movements to minute muscle twitches that are barely visible. Equally astonishing are the perceptual and cognitive processes with which we humans decode these signals in order to identify someone's particular smile, read the mood a person is in, or detect whether a comment was meant seriously or ironically. In both the cognitive and the computational sciences, however, the focus of research has been largely on the so-called universal expressions - expressions such as anger and fear which carry strong emotional contents and which are commonly identified across cultures. While important, in daily life, these universal expressions occur relatively rarely with conversational and communicative facial expressions such as slight smiles, or bored faces being much more common. Incidentally, these expressions are usually also much more subtle in terms of the facial movements making them much harder to detect and process computationally, for example. In this talk, I will describe our recent research in two areas: first, a summary of cross-cultural studies investigating the perceptual and cognitive processes in decoding of complex facial expressions, and, second, a brief introduction into the research in collaboration with Cardiff University in which we attempt to model and manipulate facial performances during long conversations.
DATE2011-07-14
TIME11:10:00
PLACEB20 Llandinham Building


TITLE

Image-Based Biomedical Modeling, Simulation and Visualization

SPEAKERChuck Hansen, Scientific Computing and Imaging Institute, University of Utah
ABSTRACTIncreasingly, biomedical researchers need to build functional computer models from images (MRI, CT, EM, etc.). The "pipeline" for building such computer models includes image analysis (segmentation, registration, filtering), geometric modeling (surface and volume mesh generation), large-scale simulation (parallel computing, GPUs), and large-scale visualization and evaluation (uncertainty, error). In this presentation, I will describe research challenges and software tools for image-based biomedical modeling, simulation and visualization, and discuss their application to solving important research and clinical problems in neuroscience, cardiology, and genetics.
DATE2011-06-17
TIME12:00:00
PLACEPhysical Sciences Lecture Theatre B


TITLE

Biology becomes Data Intensive

The Challenges of Data Integration for Systems Biologists

SPEAKERChris Rawlings, Bioinformatics and Biomathematics, Rothamsted Research
ABSTRACT

Biology is rapidly being re-shaped as a data-intensive science as biologists are faced with ever increasing challenges from both the scale and complexity of the data being generated by transformational technologies such as next generation genome sequencing techniques.

Furthermore, the adoption of systems approaches to biological research, to address some of the grand challenges in medicine and agriculture, requires many diverse types of complex data to be brought together in ways that were not previously envisaged. These challenges bring data integration techniques to the fore as one of the unsolved problems of bioinformatics.

In this seminar I will introduce the open-source, graph-based Ondex data integration and visualisation system that we have been developing at Rothamsted, and show examples of how it has been used in a range of systems biology projects to solve practical problems in syntactic and semantic data integration.

DATE2011-06-08
TIME13:00:00
PLACEPhysical Sciences Lecture Theatre A


TITLE

Paper, Geometry and Money

SPEAKERProfessor Roger Boyle, School of Computing, University of Leeds
ABSTRACTWe present an overview of three current or recent projects at Leeds. The study of paper by codicologists and papyrologists has many motivations; often the material is rare, delicate and very “difficult” to analyse and see. We consider a particular pair of case studies of archaic Arabic material that has yielded to a model-based attack, succeeded by statistically enhanced template matching. Many visual surveillance applications deploy single uncalibrated cameras over uncalibrated scenes; recapture of any 3D information then requires some constraint to be enforced. We have used expectations about the speed distributions in congested scenes to perform this task in a novel manner. Undergraduates have skills that they often freelance, and the demand for these often exceeds supply. We describe a model for matching skills and requirements that remunerates, and provides CV-fodder, in a manner that benefits students, employers and the host university.
DATE2011-03-28
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

ICT Disaster Recovery at Aberystwyth University

SPEAKERTim Davies, Information Services
PROFILEAssistant Director: ICT and Customer Services Information Services, Aberystwyth University
ABSTRACTThe talk will cover the importance of having a DR plan, and of testing and training. We will cover Information Services' "Disaster Day": why we do it and the benefits. The talk will also delve into some of the technologies and methods that IS use to achieve service continuity.
DATE2011-03-14
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Recent Advances in Biometrics, including Gait and Ear

SPEAKERProfessor Mark Nixon, School of Electronics and Computer Science, University of Southampton
PROFILEMark Nixon is Professor of Computer Vision at the University of Southampton, UK. His research interests are in image processing and computer vision. His team develops new techniques for static and moving shape extraction which have found application in automatic face and automatic gait recognition and in medical image analysis. His team were early workers in face recognition, later came to pioneer gait recognition, and more recently joined the pioneers of ear biometrics. Amongst research contracts, he was Principal Investigator with John Carter on the DARPA-supported project Automatic Gait Recognition for Human ID at a Distance. His vision textbook, co-written with Alberto Aguado, Feature Extraction and Image Processing (Academic Press), reached its 2nd edition in 2008. With Tieniu Tan and Rama Chellappa, their book Human ID Based on Gait was published in 2005. Prof. Nixon is a member of the IEEE and a Fellow of the IET and the IAPR.
ABSTRACTBiometrics concerns recognising people automatically by personal characteristics. Via computer vision, biometrics can identify people whilst enjoying the advantages of data acquisition without subject contact (or cooperation). The non-invasive biometric of greatest interest is automatic face recognition, and this has led to practical deployment. Others have been developing too: people can be recognised by the way they walk, their gait, and by their ear. There are even approaches which rely on human description, correlated with video information. These approaches are then suited to applications beyond access control: they can be deployed and refined for surveillance, and there is emergent interest in their deployment in forensics. This talk will survey the state of the art in these approaches and consider ways in which the UK can benefit from deployment, not just as security mechanisms, but also their wider deployment to aid and abet society's progress.
DATE2011-02-28
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Postgraduate/Research Opportunities within the Department of Computer Science

SPEAKERRepresentatives of Departmental Research Groups
ABSTRACTIf you are interested in studying for an Advanced MSc, MPhil or PhD degree, then this talk is for you. There will be a short overview of what it means to study for these degrees, and issues such as period of study, money, University Competition for funding etc. will be mentioned. The remainder of the seminar will consist of talks from the four departmental Research Groups. They will present examples of current research within their groups and provide a flavour of possible future research projects that you might be interested in. Our four research groups are: Advanced Reasoning; Computational Biology; Intelligent Robotics and the Vision, Graphics and Visualisation group.
DATE2011-02-07
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Linux: Evolving A Complex System On The Fly

SPEAKERAlan Cox, Linux, Intel
ABSTRACT

The Linux kernel grows at over a line a minute, and a line of code changes every fifteen seconds. Releases are done about quarterly and there is no separate long term development codebase.

This seminar explores the history of the kernel development process and how over a thousand people with no formal management structure continually re-engineer a complex system.
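A quick back-of-envelope check of the quoted rates (my own arithmetic, not figures from the talk):

```python
# Yearly totals implied by the rates above: over one new line per minute
# of net growth, and one existing line changed every fifteen seconds.
MINUTES_PER_YEAR = 365 * 24 * 60
SECONDS_PER_YEAR = MINUTES_PER_YEAR * 60

net_growth = MINUTES_PER_YEAR        # lower bound on new lines per year
churn = SECONDS_PER_YEAR // 15       # changed lines per year

print(f"net growth: over {net_growth:,} lines/year")
print(f"churn: roughly {churn:,} line changes/year")
```

So the codebase gains upwards of half a million lines a year while roughly two million existing lines are rewritten, all without a separate long-term development branch.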

DATE2011-01-31
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Contradiction and Inconsistency in Fuzzy Sets

SPEAKERProfessor Chris Hinde, Computer Science, Loughborough University
ABSTRACTFuzzy sets are useful for modelling vague concepts. In 1983 Atanassov introduced intuitionistic fuzzy sets, based on membership and non-membership values. The evidence supporting these is in the form of elimination of possibilities, as used in fuzzy sets, and also support for non-possibilities, from which the non-membership function is derived. There is the possibility of contradiction arising between these two functions, and a logic for processing contradictory fuzzy sets is developed. The existence of contradiction and inconsistency is usually regarded as something to be avoided, but both can be used to derive knowledge about the world, on the assumption that the real world is neither contradictory nor inconsistent.
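To make the membership/non-membership idea concrete, here is a minimal sketch (my own illustration, not Professor Hinde's formulation): an intuitionistic fuzzy element carries a membership value mu and an independently derived non-membership value nu; classically mu + nu <= 1, and when separately gathered evidence pushes the sum above 1, the excess can itself be read as a degree of contradiction.

```python
def contradiction(mu: float, nu: float) -> float:
    """Degree to which membership and non-membership evidence conflict."""
    return max(0.0, mu + nu - 1.0)

def hesitation(mu: float, nu: float) -> float:
    """Atanassov's hesitation margin: evidence committed to neither side."""
    return max(0.0, 1.0 - mu - nu)

# Consistent evidence: the two values leave room for hesitation.
print(contradiction(0.6, 0.3))   # 0.0 -> no conflict
print(hesitation(0.6, 0.3))      # ~0.1 of undecided mass

# Over-committed evidence: the two sources contradict each other.
print(contradiction(0.8, 0.5))   # ~0.3 of conflicting support
```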
DATE2010-12-06
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Evil on the Internet

SPEAKERDr Richard Clayton, Computer Laboratory, University of Cambridge
PROFILEDr Richard Clayton is a security researcher in the Computer Laboratory of the University of Cambridge. He has been studying wickedness on the Internet for years: be it spam, denial-of-service attacks (intentional and inadvertent), or, particularly, phishing -- the use of fake bank webpages to steal credentials, and later all of your money.
He is currently collaborating with the National Physical Laboratory (NPL), and spending half his time in Teddington, on a project that will develop robust and accurate measurements of Internet security mechanisms.
ABSTRACTThere are a lot of evil things on the Internet if you know where to look for them. Phishing websites collect banking credentials; mule-recruitment websites entice people into money laundering; fake escrow sites defraud the winners of online auctions; fake banks hold the cash for fake African dictators; and there are even Ponzi-scheme websites where (almost) everyone knows that they're a scam. This talk will show you live examples of these sites, explain how they work, and tell you what little we currently know about the criminals who operate them.
DATE2010-11-22
TIME16:10:00
PLACEPhysics Lecture Room 320


TITLE

Non-Stationary Fuzzy Reasoning in Clinical Decision Support

SPEAKERDr Jon Garibaldi, Intelligent Modelling and Analysis Research Group, School of Computer Science, University of Nottingham
ABSTRACTFuzzy sets were introduced by Zadeh in the 1960s, and were subsequently expanded into a complete systematic framework for dealing with uncertainty. As part of the generic fuzzy methodologies, fuzzy inference systems were proposed for the modelling of human reasoning with uncertain data and knowledge. However, standard fuzzy sets and fuzzy reasoning do not model the variability in decision making that is typically exhibited by all human experts in any domain. Variation may occur among the decisions of a panel of human experts (inter-expert variability), as well as in the decisions of an individual expert over time (intra-expert variability).
Dr Garibaldi has introduced the concept of non-stationary fuzzy sets, in which small changes (perturbations) are introduced in the membership functions associated with the linguistic terms of a fuzzy inference system. These small changes mean that each time a fuzzy inference system is run with the same data, a different result is obtained. It is straightforward to extend this notion to create an ensemble fuzzy inference system featuring non-stationary fuzzy sets. In this talk (aimed at an audience not completely familiar with fuzzy methods), non-stationary fuzzy sets and reasoning will be explained in detail, and their use in several real-world scenarios of decision support in medical contexts will be described. Results will be presented to demonstrate the benefits of non-stationary fuzzy reasoning.
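The core mechanism can be sketched as follows (my own simplified illustration, with a single Gaussian membership function standing in for a full inference system): perturbing the function's centre on each evaluation makes repeated runs on the same input return a spread of values rather than a single fixed one, mimicking intra-expert variability.

```python
import math
import random

def membership(x: float, centre: float, width: float) -> float:
    """Gaussian membership of x in one linguistic term."""
    return math.exp(-((x - centre) ** 2) / (2.0 * width ** 2))

def non_stationary_eval(x, centre, width, sigma, runs, rng):
    """Evaluate the same term repeatedly with a perturbed centre."""
    return [membership(x, centre + rng.gauss(0.0, sigma), width)
            for _ in range(runs)]

rng = random.Random(42)
outputs = non_stationary_eval(x=5.0, centre=5.2, width=1.0,
                              sigma=0.1, runs=100, rng=rng)
spread = max(outputs) - min(outputs)
print(f"mean membership {sum(outputs) / len(outputs):.3f}, spread {spread:.3f}")
```

Averaging the runs gives the ensemble behaviour mentioned above; the non-zero spread is the modelled variability.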
DATE2010-11-08
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Robotic Exploration Challenges: 2010 to 2030 and beyond

SPEAKERProf Dave Barnes
PROFILEDave Barnes is Professor of Space & Planetary Robotics and has been active in robotics research for over 25 years. He is a member of the STFC Aurora Advisory Committee (AurAC), and the STFC Particle Physics, Astronomy and Nuclear Physics Science (PPAN) committee. He was a member of the 2003 Beagle 2 Mars lander consortium with responsibilities for the calibration of the Beagle 2 robot ARM, and for generating a virtual Beagle 2 model for rehearsals, planning and ARM commanding during the Mars mission. He was a member of the EADS Astrium Ltd. led team for the ESA ExoMars Rover Phase A Study. He is a Co-I on the ExoMars Panoramic Camera (PanCam) team for the 2018 mission, and is a member of the ESA Cosmic Vision Marco Polo mission team.
ABSTRACTThe use of robots for planetary exploration will create many new challenges over the coming decades. NASA is planning a number of Mars missions such as the Mars Science Laboratory (MSL) which is scheduled for launch in 2011. Building upon the successful Mars Express mission, ESA and NASA are working to fly new missions to Mars such as the ExoMars rover in 2018, and an eventual Mars Sample Return (MSR) mission. With each new mission greater demands are placed upon planetary robotics know-how, and new challenges have begun to emerge. The demand for greater science return and reduced operation costs is moving planetary robotics towards greater autonomy. Future planetary robots will need to travel further and faster, and conduct opportunistic science on the way!

In addition to navigation, autonomous localisation and autonomous scientific sample acquisition will be required. Scientists want to go to new locations on Mars where current wheeled locomotion is impractical, and aerobot technology is becoming a real possibility for future missions. Issues such as planetary robot survivability and longevity are key research areas that need to be addressed. As always these challenges are set against the `ideal' engineering requirement for zero mass, zero power and zero volume. This presentation will focus upon the future challenges for planetary robotic exploration from 2010 to 2030 and beyond.

After the seminar there will be refreshments at 5:30pm, followed by the BCS AGM and a talk at 6pm.

DATE2010-10-25
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

The Automatic Identification of Non-Growing Follicles in Human Ovaries

SPEAKERTom Kelsey, School of Computer Science, University of St Andrews
ABSTRACTThe human ovary contains a fixed number of non-growing follicles (NGF) established before birth; this number declines with increasing age culminating in the menopause at 50-51 years on average. NGF populations are estimated using a standard methodology: the ovary is fixed, thin slices (5-20 micrometres) are taken at regular intervals, and these are stained with hematoxylin and eosin (HE). Sample regions are photographed, with the NGFs appearing in these images counted by hand.
Assuming an even distribution of NGFs throughout the ovary, the population is estimated by integration. This process is time-consuming, and suffers from human misclassification, integration error due to small sample sizes, and the inconsistent assumption of even distribution. In this talk I present a combined histological and automatic feature-detection approach, leading to reduced human and sampling errors at low magnification, and which can, in principle, be used to obtain almost exact NGF populations from fully sectioned ovaries.
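The standard estimate described above can be sketched in a few lines (the counts, thicknesses and function names here are invented for illustration): counts from regularly spaced slices are converted to a per-micrometre density and integrated along the ovary with the trapezoidal rule.

```python
def estimate_population(counts, slice_thickness_um, sampling_interval_um):
    """Estimate total NGFs from counts in regularly sampled slices."""
    # Follicle density per micrometre at each sampled position.
    densities = [c / slice_thickness_um for c in counts]
    # Trapezoidal integration along the length of the ovary.
    total = 0.0
    for a, b in zip(densities, densities[1:]):
        total += 0.5 * (a + b) * sampling_interval_um
    return total

# Synthetic example: 11 slices of 10 micrometres, sampled every 500 micrometres.
counts = [12, 15, 22, 30, 41, 44, 39, 28, 20, 14, 9]
print(round(estimate_population(counts, 10.0, 500.0)))  # 13175
```

Any miscount propagates through the integral, which is why automated feature detection targets both the counting error and the sampling error.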
DATE2010-04-26
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

MOEA/D and RM-MEDA: Two Recent Multiobjective Optimization Methods

SPEAKERProfessor Qingfu Zhang, School of Computer Science & Electronic Engineering, University of Essex
ABSTRACTMultiobjective evolutionary algorithms (MOEAs) are one of the hottest topics in the area of evolutionary computation. A multiobjective optimisation problem (MOP) may have many, or even infinitely many, Pareto-optimal solutions. MOEAs aim at finding a number of well-representative Pareto solutions for a decision maker. Most current MOEAs do not take advantage of results in traditional mathematical programming. MOEA/D and RM-MEDA are two very recent MOEAs, developed at Essex, which use ideas from traditional optimisation methods. In this talk, I will explain the motivations, ideas, and main steps of these two methods, and show some experimental results.
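For readers unfamiliar with the decomposition idea behind MOEA/D, here is a simplified sketch (my own illustration, not the authors' code): the multiobjective problem is split into scalar subproblems, one per weight vector, and the weighted Tchebycheff aggregation is a common way to scalarise each one.

```python
def tchebycheff(objectives, weights, ideal):
    """Scalarise one objective vector for one weight vector (minimisation)."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

ideal = (0.0, 0.0)           # best value seen so far on each objective
weights = (0.8, 0.2)         # a subproblem emphasising objective 1

a = (0.2, 0.9)               # good on objective 1, poor on objective 2
b = (0.6, 0.3)               # the reverse trade-off

# For this subproblem: max(0.16, 0.18) = 0.18 versus max(0.48, 0.06) = 0.48,
# so solution a wins; other weight vectors would prefer b, which is how the
# set of subproblems spreads solutions across the Pareto front.
print(tchebycheff(a, weights, ideal) < tchebycheff(b, weights, ideal))  # True
```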
DATE2010-03-22
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Computing for the Future of the Planet

SPEAKERProfessor Andy Hopper, CBE, FREng, FRS, The Computer Laboratory, University of Cambridge
ABSTRACTDigital technology is becoming an indispensable and crucial component of our lives, society, and environment. A framework for computing in the context of problems facing the planet will be presented. The framework has a number of goals: an optimal digital infrastructure, sensing and optimising with a global world model, reliably predicting and reacting to our environment, and digital alternatives to physical activities.
DATE2010-03-15
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Automatic Fault Detection for Autosub6000

SPEAKERDr Richard Dearden, School of Computer Science, The University of Birmingham
PROFILERichard Dearden is a Senior Lecturer at the University of Birmingham. His research interests are in the area of reasoning under uncertainty, including work in planning, fault diagnosis, robotics and other aspects of statistical AI. He is PI for three current projects: AFDA, a NERC-funded project to add fault diagnosis to an underwater vehicle; CogX, an FP7-funded cognitive robotics project; and GeRT, also EU-funded, on generalising robot programs to learn planning operators. Previously he was at NASA Ames Research Center, where he led the Model-Based Diagnosis and Recovery Group. He received his Ph.D. from the University of British Columbia in 1999, on planning and reinforcement learning in uncertain worlds.
ABSTRACTState estimation and fault detection are important components of robotic systems. A number of approaches have been applied to the problem, but in recent years there have been significant successes for model-based approaches. In this talk I will describe two model-based diagnosis techniques: Livingstone 2, a discrete consistency-based approach, and a hybrid-systems approach based on particle filtering. We are using both approaches as part of AFDA (Automated Fault Detection for Autosub6000), a three-year NERC-funded project to provide fault-detection technology for a deep-diving autonomous underwater vehicle operated by the National Oceanographic Centre. As well as describing the approaches, I will also discuss how we are applying them to this project.
DATE2010-03-08
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Bridging the gap between Formal and Computational Semantics

SPEAKERProfessor Stephen Pulman, Oxford University Computing Laboratory
ABSTRACTNow that reasonably accurate wide-coverage parsers are available, it should be possible to increase semantic coverage of sentences too.

The literature in formal linguistic semantics contains a wealth of fine grained and detailed analyses of many linguistic phenomena. But very little of this work has found its way into implementations, despite a widespread feeling (among linguists at least) that this can't be very difficult: just fix a grammar to produce the right logical forms and hook them up to a theorem prover. In this talk I take a representative analysis of adjectival comparatives and ask what steps one would have to go through so as to use this analysis in a computational setting like open domain question-answering. I then try to identify some general conclusions that can be drawn from this exercise.
DATE2010-03-01
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Biologically Inspired Robotics: Neuromechanics and Control

SPEAKERDr Ravi Vaidyanathan, Bristol Robotics Laboratory
PROFILESenior Lecturer in Biodynamics
ABSTRACTComplex behavior may be viewed as an emergent phenomenon resulting from the interaction of an entity with its environment through sensory-motor activity. From a systems perspective, the dynamic morphology of a structure plays a critical computational role in this process, in effect subsuming portions of the control architecture. In animals, for example, intrinsic properties of the musculoskeletal system augment the neural stabilization of the organism for an array of critical functions.
Invertebrates, in particular, have been able to exploit a wide range of behavioral niches because they utilize a body plan that can be modified to create functional adaptations optimized for a particular role. The talk will review basic methodologies for the enhancement of engineering (robotic) design based upon biological studies of animal behavior from a hierarchical systems perspective with emphasis on coupling between mechanics and control systems. Architectures founded upon biological inspiration will be summarized with specific examples from the speaker's work, including recent research that has been featured in New Scientist, Flight Global Magazine, The Engineer, and on television specials produced by the Discovery Channel and Tokyo Broadcasting Systems.
Applications highlighted will include medical and mobile robotic systems including the (pictured) Morphing Micro Air-Land Vehicle and the Bristol Hand Rehabilitation Robot.
DATE2010-02-22
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Towards Robust Autonomy in 'Large Worlds'

SPEAKERDr Subramanian Ramamoorthy, School of Informatics, The University of Edinburgh
PROFILESubramanian Ramamoorthy is a Lecturer in the School of Informatics at The University of Edinburgh. Within the School, he is associated with the Institute of Perception, Action and Behaviour and the Informatics Life Sciences Institute.
Prior to that, he was in the Intelligent Robotics Lab at The University of Texas at Austin, where he received his Ph.D. His research is centred on sequential decision problems involving complex dynamical systems, motivated by applications in robotics and autonomous agent design, and addressed using a combination of techniques from machine learning and mathematical systems theory. In addition to his academic experience, he has spent several years in various research and development groups at National Instruments Corp., working on algorithmic tools for motion control, computer vision and dynamic simulation.
ABSTRACTOne of my long term research goals is to create autonomous robots, e.g., humanoids, that are capable of functioning effectively in "large worlds", i.e., in environments with significant structural and quantitative uncertainty. Along this path, the core technical questions that must be addressed are those of sequential decision making. What makes these problems challenging is the confluence of two issues: (a) How could one encode dynamically dexterous behaviours in these high-dimensional constrained nonlinear dynamical systems so that the corresponding decision problems are tractable in online, resource-constrained settings? (b) How could an autonomous robot come to terms with a continually changing environment and task specification? I will outline a factored approach to solving these problems that involves two types of technical tools.
The problem of task encoding can be addressed by taking a geometric view of dynamics. Many interesting behaviours involving humanoid robots admit low-dimensional and abstract descriptions, e.g., all trajectories corresponding to a dynamical behaviour may be restricted to a submanifold in configuration space. I will first describe where this structure comes from, using a concrete example involving bipedal walking on irregular terrain. Then, I will describe an algorithmic procedure for learning this structure from data and utilizing it for motion synthesis, in the absence of analytical models of the system dynamics.
To the extent that a large class of interesting behaviours may be viewed in such geometric terms, one may pose the problem of coping with the changing environment and task specifications as a generic problem of adversarial navigation in these spaces. I will outline a game theoretic procedure for solving this problem, wherein the agent utilizes a set of learnt primitives to synthesize a composite strategy that constitutes the equilibrium of a game against nature.
These results represent initial steps in a larger research program, towards robust autonomous agents in large worlds. I will conclude with some remarks regarding current and future work in this direction.
DATE2009-12-07
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Assisted Prostatic Biopsy: Problems, Perspectives and Challenges

SPEAKERDr Robert Marti, University of Girona
ABSTRACTThe talk will focus on the work developed in recent years on prostate guided biopsy in the Medical Image Analysis lab of the VICOROB group. I will initially give an overview of the problem and the motivation for the project. I will then present the methods developed, focusing on the image fusion problem, which involves computer vision topics such as multi-modal rigid and non-rigid image registration, atlas-based image segmentation, and image reconstruction applied to ultrasound (US) and magnetic resonance (MR) images. Finally, open challenges and future directions will be discussed.
DATE2009-11-30
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

An introduction to Scrum: the agile project management methodology

SPEAKERGeoff Watts, Inspect and Adapt
ABSTRACTThis seminar will cover the basics of the Scrum framework, the roles within a Scrum team, and their responsibilities.
DATE2009-11-19
TIME13:10:00
PLACEHugh Owen C22


TITLE

An Introduction to Phylogenetic Networks

SPEAKERProfessor Vincent Moulton
ABSTRACTIt has now been 150 years since Charles Darwin presented his theory on the origin of species, asserting that all organisms are related to one another by common descent via a “tree of life”. Since then, biologists have been able to piece together a great deal of information concerning this tree – benefiting in more recent times from the advent of ever cheaper and faster DNA sequencing technologies. Even so, it is now commonly accepted that certain organisms such as plants and viruses - including, for example, swine flu - commonly swap their genetic information, and so representing their evolution by a tree can in certain cases be somewhat misleading. In such cases phylogenetic networks can provide a useful alternative to a tree. In this talk we will present a brief introduction to phylogenetic networks, and will mention some recent results and open problems within this burgeoning area of computational biology.
DATE2009-11-09
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Invariant Manifolds for Model Reduction
(with examples from physical and chemical kinetics)

SPEAKERProfessor Alexander N Gorban Home Page :
ABSTRACTFor dynamical models describing the behaviour of large-scale complex systems, one of the most powerful and rigorous approaches to model reduction is based on the notion of the system's slow invariant manifold. The theory of invariant manifolds was introduced more than a century ago through the work of two legendary figures of mathematics, Lyapunov and Poincare. It experienced intense development during the 20th century and is currently being vigorously revisited and re-examined as an important and powerful tool in applied mathematics used for mathematical modelling and model reduction purposes.

In this talk I would like to review the theory of the invariance equation and its application to model reduction in dissipative systems. I will try to answer the following questions: How do we find the slow invariant manifold? How can an approximate slow invariant manifold be used for model reduction? Why should we attempt to reduce the description in the age of supercomputers? A collection of methods to derive analytically and to compute numerically the slow invariant manifolds is presented. The theory is illustrated by examples from the dynamics of highly non-equilibrium gas flows and chemical reaction kinetics.
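For readers unfamiliar with the terminology, the invariance equation at the heart of the talk can be stated in one line. This is a standard textbook form under generic notation, not taken from the talk itself:

```latex
% Full (microscopic) dynamics, and a candidate manifold parametrised by
% macroscopic variables y through an immersion x = x_M(y) with reduced
% dynamics on the manifold:
\dot{x} = F(x), \qquad x = x_M(y), \qquad \dot{y} = G(y)
% Invariance: the full vector field, restricted to the manifold, must
% coincide with the push-forward of the reduced dynamics:
\frac{\partial x_M}{\partial y}\, G(y) = F\bigl(x_M(y)\bigr)
```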
DATE2009-10-26
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Two new applications for stereo-based 3D surface scanning

SPEAKERProfessor Bob Fisher Home Page : rbf@inf.ed.ac.uk : School of Informatics, University of Edinburgh :
PROFILE 
ABSTRACT1) Skin cancer diagnosis: The addition of 3D shape data to colour information from a dense stereo sensor improves both the segmentation and diagnosis of lesions. We apply this to several types of skin cancer that have not previously had much image analysis research.

2) Feature extraction from flying bats: As part of an acoustic sonar project, we are using a 500 frame per second range sensor to observe position and shape changes of bats as they capture prey. We will describe the new sensor and show some examples of the information extracted.
DATE2009-05-04
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Counting Computers (and figuring out what they do)

SPEAKERAndy Ormsby WWW : andy@a-ormsby.co.uk :
PROFILE 
ABSTRACTThere are something like 7,000 data centres in the US and many more around the world. In total, there are probably something like 44 million servers sitting in data centres. The numbers continue to grow quickly. By 2020, data centres are projected to collectively have a greater carbon footprint than aviation. But the companies that own and run these data centres often have difficulty in answering even the most basic questions, such as "what servers do I have?", "what applications are running in my data centre?" or perhaps the more basic "what can I turn off?".
In this talk, I'll describe some of the technology that is used to help people discover the answers to questions like this and provide some insight into large scale IT along the way.
DATE2009-03-16
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Digital TV, MHEG-5 and Java

SPEAKERJohn Hunt Personal Website : jeh@midmarsh.co.uk :
PROFILE 
ABSTRACTDigital TV is about to take over as the only option available for terrestrial TV. However, digital TV is about more than just watching TV; it is about interacting with the TV and running applications on your Set Top Box (STB). These applications may be simple electronic programme guides, games or (via a return line) interactive server-oriented e-commerce systems. However, the software used to run these systems is still developing: many STBs now run Linux, some run Java, others still use MHEG-5, and others are starting to use Adobe AIR or Flash Lite. What does this mean for the future of the humble Set Top Box? Will this help to create the connected home of the future, and what do MHEG-5 or Java applications look like anyway?
This talk will explore the current directions, illustrate some free to air digital applications using MHEG-5 and Java and consider where the current trajectory could take us as the STB becomes the central hub of the connected home.
DATE2009-03-02
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

How to Build an Effective Team --- Evolving Neural Network Ensembles

SPEAKERProfessor Xin Yao Institute : Home Page : Contact Details :
PROFILE  
ABSTRACTPrevious work on evolving neural networks has focused on single neural networks. However, monolithic neural networks become too complex to train and evolve for large and complex problems. It is often better to design a collection of simpler neural networks that work collectively and cooperatively to solve a large and complex problem. The key issue here is how to design such a collection automatically so that it has the best generalisation. This talk introduces some recent work on evolving neural network ensembles, including negative correlation, constructive negative correlation and multi-objective approaches to ensemble learning.
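As a minimal sketch of the negative correlation idea mentioned in the abstract (the standard formulation, with invented numbers; not code from the talk): each member is trained on its own squared error plus a penalty that rewards disagreement with the ensemble mean, so members specialise rather than converge to the same model.

```python
import numpy as np

def ncl_errors(preds, y, lam):
    """Per-member negative-correlation-learning error:
    (f_i - y)^2 + lam * p_i, where
    p_i = (f_i - fbar) * sum_{j != i} (f_j - fbar) = -(f_i - fbar)^2,
    with fbar the ensemble mean prediction."""
    preds = np.asarray(preds, dtype=float)
    fbar = preds.mean()
    penalty = -(preds - fbar) ** 2      # closed form of p_i (see docstring)
    return (preds - y) ** 2 + lam * penalty

# Three hypothetical member predictions for a single target y = 1.0.
print(ncl_errors([0.9, 1.1, 1.6], 1.0, lam=0.5))
```

Note that the member furthest from the ensemble mean receives the largest (negative) correlation reward, which is what drives diversity.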
DATE2009-02-16
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Factorial Switching Linear Dynamical Systems for Neonatal Condition Monitoring

SPEAKERProfessor Chris Williams ckiw@inf.ed.ac.uk :
PROFILEInstitute and Home Page
ABSTRACTCondition monitoring often involves the analysis of measurements taken from a system which "switches" between different modes of operation in some way. Given a sequence of observations, the task is to infer which possible condition (or "switch setting") of the system is most likely at each time frame. In this talk we describe the use of factorial switching linear dynamical models for such problems. A particular advantage of this construction is that it provides a framework in which domain knowledge about the system being analysed can easily be incorporated.
We demonstrate the flexibility of this type of model by applying it to the problem of monitoring the condition of a premature baby receiving intensive care. The state of health of a baby cannot be observed directly, but different underlying factors are associated with particular patterns of measurements, e.g. in the heart rate, blood pressure and temperature. We use the model to infer the presence of two different types of factors: common, recognisable regimes (e.g. certain artifacts or common physiological phenomena), and novel patterns which are clinically significant but have unknown cause. Experimental results are given which show the developed methods to be effective on real intensive care unit monitoring data.
Joint work with John Quinn, Neil McIntosh
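A hedged illustration of the switching idea (a single switch variable, not the full factorial model from the talk): run one Kalman filter per candidate regime and label each time step with the regime whose one-step predictive likelihood of the observation is highest. The regime names and all numbers below are invented.

```python
import math

def kalman_step(mean, var, obs, q, r):
    """One predict/update cycle of a scalar random-walk Kalman filter.
    q = process-noise variance, r = observation-noise variance.
    Returns the updated state and the predictive log-likelihood of obs."""
    pred_mean, pred_var = mean, var + q
    s = pred_var + r                               # innovation variance
    loglik = -0.5 * (math.log(2 * math.pi * s) + (obs - pred_mean) ** 2 / s)
    k = pred_var / s                               # Kalman gain
    return pred_mean + k * (obs - pred_mean), (1 - k) * pred_var, loglik

# Two invented regimes: "stable" (small process noise) vs "artifact"
# (large process noise, so sudden jumps are plausible under it).
regimes = {"stable": 0.01, "artifact": 4.0}
state = {name: (0.0, 1.0) for name in regimes}     # (mean, variance) priors

observations = [0.1, -0.05, 0.02, 5.0, 5.2]       # a sudden jump at t = 3

labels = []
for obs in observations:
    scores = {}
    for name, q in regimes.items():
        mean, var, ll = kalman_step(*state[name], obs, q, r=0.1)
        state[name] = (mean, var)
        scores[name] = ll
    labels.append(max(scores, key=scores.get))     # most likely regime

print(labels)
```

The jump at t = 3 is far more probable under the high-noise regime, so the label switches there, which is the essence of inferring the "switch setting" from data.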
DATE2009-02-02
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Research Opportunities within the Department of Computer Science

SPEAKERLeaders of Departmental Research Groups
PROFILE 
ABSTRACTIf you are interested in studying for an MPhil or PhD degree, then this talk is for you. Prof Shen, the Department Director of Research, will present a short overview of what it means to study for these degrees; issues such as the period of study, money, the University Competition for funding, etc. will be mentioned. The remainder of the seminar will consist of talks from the four department Research Group Heads. They will present examples of current research within their groups and provide a flavour of possible future research projects that you might be interested in. Our four research groups are: Advanced Reasoning; Computational Biology; Intelligent Robotics and the Vision, Graphics and Visualisation group.
DATE2008-12-08
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Some novel developments in dimensionality reduction for classification

SPEAKERDr Guido Sanguinetti Dr Guido Sanguinetti : Machine Learning Group : Department of Computer Science : University of Sheffield :
PROFILE 
ABSTRACTCommon dimensionality reduction techniques such as PCA and its generalisations address the problem of finding a lower dimensional representation of data based on variance considerations. However, the most varying directions need not be the most interesting: for example, if a high dimensional data set is known to contain clusters, the best dimensionality reduction will extract features that best discriminate between clusters, rather than capturing the most variance. We exploit this idea and introduce a latent variable model that extracts, at maximum likelihood, optimal discriminative features (in the sense of Fisher's discriminant) without access to label information. We then extend the framework to address the semi-supervised problem and possible non-linear extensions.
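The abstract's central point, that the most varying direction need not be the most discriminative, can be seen on invented two-cluster data by comparing the leading PCA direction with Fisher's discriminant direction (a plain numpy sketch, not the speaker's latent variable model):

```python
import numpy as np

# Two synthetic clusters: within-cluster variance is largest along x,
# but the clusters are separated along y. All data are made up.
rng = np.random.default_rng(0)
c0 = rng.normal([0, 0], [3.0, 0.5], size=(200, 2))
c1 = rng.normal([0, 4], [3.0, 0.5], size=(200, 2))
X = np.vstack([c0, c1])

# PCA direction: leading eigenvector of the total covariance.
evals, evecs = np.linalg.eigh(np.cov(X.T))
pca_dir = evecs[:, np.argmax(evals)]

# Fisher direction: w proportional to S_w^{-1} (mu1 - mu0),
# with S_w the pooled within-class scatter.
sw = np.cov(c0.T) + np.cov(c1.T)
w = np.linalg.solve(sw, c1.mean(0) - c0.mean(0))
fisher_dir = w / np.linalg.norm(w)

# PCA picks the high-variance x axis; Fisher picks the discriminative y axis.
print(np.abs(pca_dir), np.abs(fisher_dir))
```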
DATE2008-12-01
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Freud, Ego, and Robotic Autonomy

SPEAKERDerek Smith (DSmith@uwic.ac.uk) Home Page :
PROFILEDuring the 1980s Derek Smith was with British Telecom, Cardiff, where he specialised in the design and operation of very large CA-IDMS "semantic network" databases. Since 1991 he has taught neuropsychology to Psychology and Speech and Language Therapy undergraduates. He is working currently in association with International Software Products, Toronto, on "Konrad", an artificial consciousness project using a CA-IDMS platform.
ABSTRACTIn this talk, the speaker will be discussing just how much modern computing already owes to the father of psychoanalysis, Sigmund Freud.
High on the surprisingly long list of achievements is Freud's seminal work on the modular architecture of the human lexico-semantic system and the uncanny accuracy of his predictions of real-time biological control structures, sometimes a full lifetime before the machines existed to put his ideas into practice. The talk will also highlight a number of areas where Freud still has much to offer researchers in the domains of artificial intelligence in general, and artificial consciousness in particular.
DATE2008-11-17
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Aropä and PeerWise: Supporting Student Contributed Pedagogy in Large Classes

SPEAKERJohn Hamer (J.Hamer@cs.auckland.ac.nz) Home Page :
PROFILEJohn Hamer is a Senior Lecturer in Computer Science at the University of Auckland. His research interests include how novices learn programming, and the use of student contributing pedagogies. In 2004 he developed Aropa, an award-winning tool that supports peer assessment, and which is now used by over 1,000 students each semester in a diverse range of courses including Commercial Law, English, Engineering, Medical Science and Pharmacology.
ABSTRACTAropä and PeerWise are two web-based tools that support collaborative learning in large, undergraduate classes. Aropä manages peer assessment activities, allowing students to take part in double-blind refereeing of their peers' coursework. PeerWise is a data bank of multi-choice questions contributed, explained and discussed entirely by students.
These systems leverage the latent intellectual capacity of a large class to provide new opportunities for learning. Using Aropä, each student might review three or four essays and receive a corresponding amount of feedback, all within a few days. The immediacy and diversity of the feedback is substantially greater than can be produced by a tutor.
While the quality of the reviewing is typically variable, there are affective benefits in challenging students to distinguish between good and poor feedback. By eliminating the stamp of authority and introducing diverse, possibly conflicting feedback, students are required to exercise their critical judgement in deciding what information to accept and reject. Moreover, tutor marking can still be used, and can even be mixed in with the peer reviewing.
PeerWise leverages the energy of a large class in a different way, building an annotated question bank that can contain thousands of multiple-choice questions. Each question is accompanied by an explanation written by the question author, overall quality and difficulty ratings assigned by students who have answered the question, and possibly a forum in which misunderstandings and possible improvements are discussed. The question bank thus serves two complementary purposes: a creative medium in which students engage in deep learning and critical reflection; and a drill-and-test library for developing fluency with the course content.
We have statistical evidence to show that active use of these tools strongly correlates with learning. Further, as a side-effect of channelling all interaction through a central database, a detailed record of student interaction is collected. This record allows instructors to monitor overall class performance and to assess individual students over time in modes that limit opportunities for plagiarism. With routine use, a rich picture of student performance is collected.
We are currently at the point of building additional tools to further exploit the interaction data. These include reputation systems, whereby the quality of an individual's comments and feedback is judged by the recipients, and recommender systems, in which participants are able to highlight instances of high quality work. Both of these ideas are present in popular online auction and shopping sites, but have not been widely adapted for educational use.
The talk will describe the Aropä and PeerWise tools, discuss the education theory behind the ideas, present results from the ongoing research study into student learning and attitudes toward the tools, and elaborate some of our ideas for future development.
DATE2008-11-03
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Modelling the growth of flowers

SPEAKERProfessor Andrew Bangham Department of Computing Sciences, University of East Anglia : A.Bangham@uea.ac.uk :
PROFILEClick to view the Professor Andrew Bangham profile
ABSTRACTThe relationship between gene expression and growth is often speculated on, but here we describe an attempt to model their relationship with the help of finite element models.
DATE2008-10-20
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

A Vision for the Science of Computing

SPEAKERSir Anthony Hoare Sir Anthony Hoare : Microsoft Reseach : http://en.wikipedia.org/wiki/Tony_Hoare :
PROFILEhttp://en.wikipedia.org/wiki/Tony_Hoare
ABSTRACTI have a vision of the day when software is the most reliable component of any product or system which it controls. I have a vision of the day when software developers are regarded as the most reliable of professional engineers. Both of these visions will be advanced by development of our understanding of the basic Science of Computing and its embodiment in Design Automation tools for Software Engineering.
One of our basic topics of study in Computer Science is the computer program itself. Like other basic scientists, we address the most general and the most fundamental questions: ‘What does the program do?’ ‘How does it do it?’ ‘Why does it work?’ and ‘How do we know that the answers to all these questions are correct?’ As long as software engineers cannot answer these questions, we are unlikely to reduce the current prevalence of programming error.
For routine application in Software Engineering, the answers discovered by scientific research will be embodied in a suite of Design Automation tools, with coherent coverage of all phases in the lifetime of programs, from requirements analysis through specification, design, coding, testing, delivery and subsequent maintenance and evolution. As in other branches of engineering, these tools will automate all necessary deductions and calculations, and will thereby conceal from the professional engineer the unpopular fact that the language of Science, even Computing Science, is mathematics.
The final condition for widespread acceptance of the tools is the provision of a substantial corpus of case studies of their successful application in all phases of program development. These case studies will be generalised as widely applicable design patterns for adaptation and re-use in subsequent programming projects addressing the same area of application. Initially, the case studies will be selected and constructed by the scientists themselves, and used to assess the applicability of theory and the advancement of technology of tool construction. This work has already started in many centres of research throughout the world.
In summary, the achievement of my vision will depend on a high degree of co-operation and objectively decided competition between rival and complementary branches of Computing Science. It requires an increase in the scale and ambition of our research goals which is characteristic of other mature branches of science. Do we have the courage to make such a dramatic shift in our research culture?
DATE2008-05-23
TIME14:00:00
PLACEPhysics Main Lecture Theatre


TITLE

"EKOSS - a knowledge creator centered system for supporting the sharing, discovery, and integration of expert knowledge"
and
"Science Integration Programme - Human"

SPEAKERYasunori Yamamoto and Steven Kraines, University of Tokyo, Japan. EKOSS : Science Integration Program : OReFiL : Anatomography :
PROFILEThe Science Integration Programme is headed by Professor Toshihisa Takagi from the Department of Computational Biology in the Graduate School of Frontier Sciences at the University of Tokyo. The programme is currently composed of four full time faculty members, one research fellow, and one adjunct faculty member.
ABSTRACT"EKOSS - a knowledge creator centered system for supporting the sharing, discovery, and integration of expert knowledge": Leveraging recent developments in semantic web technologies and artificial intelligence, particularly web-based ontologies and logical inference reasoners, the EKOSS (Expert Knowledge Ontology-based Semantic Search) platform has been developed and deployed on the Web. EKOSS focuses on providing knowledge creators with intuitive and easily accessible tools for creating computer-interpretable semantic statements that describe their expert knowledge based on ontologies. EKOSS also provides a set of tools for helping users search and mine the semantic statements through semantic matching, and for performing other reasoning tasks based on the RacerPro description logics reasoner. Using EKOSS, it is hoped that repositories of semantic statements can be realized that are authored by the knowledge experts themselves but that can also be interpreted "intelligently" by computer reasoning algorithms. Initiatives to "get the ball rolling" by constructing knowledge repositories in the areas of sustainability and energy science, life sciences, and engineering failure knowledge will be presented, together with preliminary analysis results of the semantic statements created to date. I hope that there will be opportunity and interest for discussion of the ontologies that have been constructed for the EKOSS system, particularly in the domain of life sciences.
"Science Integration Programme - Human": In April 2005, the "Science Integration Programme" was established at the University of Tokyo under the Division of Project Coordination at the office of the president. The programme, which is scheduled to continue until March 2011, is a research initiative directed at bridging the gap between science and humanity by integrating different fields of natural science across scales and domains.
In particular, the programme aims to establish new fields of study that target societal needs, such as environmental problems and life science phenomena that defy solution through the application of knowledge from individual fields of science, or whose overall picture is difficult to grasp. Within the overall framework of the Science Integration Programme, the Science Integration Programme - Human seeks to develop methods and examples that show the effectiveness of the scientific integration approach in a clearly understandable form for the life sciences domain. In particular, research is directed towards clarifying the structure of knowledge as well as the similarities and differences in the behaviors of systems in life sciences, with special focus on human biology from genome to organism level phenomena. Based on this work, the programme aims to devise a framework for uniform description of several cross-sections of life science knowledge, including metabolic and signaling pathways, evolutionary science, human behavioral science, and brain science.
DATE2008-03-14
TIME16:10:00
PLACEB22, Llandinam


TITLE

Semantic Web: The Story So Far

SPEAKERProf. Ian Horrocks Home Page : Computing Laboratory : University of Oxford : OUCL-seminar.ppt : OUCL-seminar.pdf :
PROFILE 
ABSTRACTThe goal of Semantic Web research is to transform the Web from a linked document repository into a distributed knowledge base and application platform, thus allowing the vast range of available information and services to be more effectively exploited by software agents. As a first step in this transformation, languages such as RDF and OWL have been developed. These languages are designed to provide for the exchange of both data and conceptual schemas (AKA ontologies). Although fully realising the Semantic Web still seems some way off, OWL has already been very successful, and has rapidly become a de facto standard for ontology development in fields as diverse as geography, geology, astronomy, agriculture, defence and the life sciences. An important factor in this success has been OWL's basis in (description) logic, and the availability of sophisticated tools with built in reasoning support. The increasingly widespread use of OWL has motivated a large international research effort in areas such as scalability, expressive power, ontology engineering, and integration with other KR paradigms. In this talk I will sketch the history of semantic web research, focussing mainly on the OWL ontology language. I will discuss the use of Description Logic reasoning in ontology engineering, present some illustrative examples of OWL applications, and conclude with a survey of some recent research in the area.
DATE2008-02-25
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Life, death & computer science

SPEAKERProf. Harold Thimbleby Home Page : Department of Computer Science : Swansea University :
PROFILEHarold Thimbleby is Director, Future Interaction Technology Lab, Swansea University, a visiting professor at UCL and Middlesex University, and emeritus Professor of Geometry, Gresham College. He was a Royal Society-Wolfson Research Merit Award holder, and was awarded the BCS Wilkes Medal. His most recent book, on user interface design and programming, Press On: Principles of Interaction Programming, was published by MIT Press, 2007.
ABSTRACTWe graduate lots of computer scientists, but somehow many everyday devices are very badly designed and programmed, with sometimes disastrous results. This talk explores a well-documented death caused in part by bad program design; the talk then concentrates on compiling, specifically as applied to medical devices and calculators, and shows that elementary compiling techniques could drastically improve the usability and reliability of safety critical devices -- if only we treated their user interfaces as seriously as we do the syntax, semantics and performance of programming languages.
DATE2008-02-18
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Optical 3D Reconstruction from Underwater Exploration

SPEAKERDr. Joaquim Salvi (Visiting Professor) Home Page : (University of Girona) : Ocean Systems Laboratory : School of Engineering and Physical Sciences : Heriot-Watt University : Slides :
PROFILEJoaquim Salvi graduated in computer science from the Technical University of Catalonia (Spain) in 1993, received the D.E.A. degree in computer science in July 1996, and received the Ph.D. degree in industrial engineering in 1998 from the Computer Vision and Robotics Group, University of Girona (Spain). Dr. Salvi received the Best Thesis Award in engineering for his Ph.D. dissertation. He is an associate professor with the Electronics, Computer Engineering and Automation Department, University of Girona. He is involved in several governmental projects and technology transfer projects. His current interests are in the field of computer vision and mobile robotics, focused on structured light, stereovision, and camera calibration. He is the leader of the 3-D Perception Lab. Joaquim Salvi is currently a visiting scholar at the Ocean Systems Lab, Heriot-Watt University (Scotland), where he is researching 3D computer vision applied to the navigation of underwater robots.
ABSTRACTThe talk is about a new technique to reconstruct large 3D scenes from a sequence of video images by combining the benefits of Bayesian filtering techniques and state-of-the-art 3D computer vision, two disciplines that unfortunately have seen little convergence in air and underwater scenarios. The proposed approach performs the alignment of sequences of 3D partial reconstructions of the scene using the navigation of the vehicle (position and velocity) and a Simultaneous Localization and Mapping (SLAM) approach. After a pre-processing stage to denoise and enhance the input images, partial 3D scenes are obtained using stereo techniques. Landmarks are then extracted and characterized using a combination of 2D and 3D features. A linear Kalman Filter is used to perform SLAM. Experimental results show examples of image enhancement of underwater images in poor visibility; the reconstruction of a man-made object from a ground truth sequence; and the reconstruction of large scale 3D seabeds using Simultaneous Localization and Mapping of the vehicle in a virtual scenario. Results are readily applicable to land and air robotics. I am currently processing real data to reconstruct the seabed of Loch Linnhe, Scotland, and hope to have results in time for the presentation.
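The alignment step in the abstract can be illustrated, in a much-simplified form, by the standard SVD-based rigid registration of two point clouds with known correspondences (a generic textbook method, not the speaker's full SLAM pipeline; the toy data are invented):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst,
    for corresponding 3-D points, via the SVD (Kabsch) solution."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    h = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_d - r @ mu_s
    return r, t

# Synthetic check: rotate and shift a small cloud, then recover the motion.
rng = np.random.default_rng(1)
src = rng.normal(size=(30, 3))
theta = 0.4
r_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ r_true.T + np.array([0.5, -1.0, 2.0])

r_est, t_est = rigid_align(src, dst)
err = np.abs(src @ r_est.T + t_est - dst).max()
print(err)
```

In a real registration pipeline the correspondences come from feature matching and the estimate feeds the SLAM filter; here they are given, so the recovered motion is exact up to floating-point error.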
DATE2008-01-28
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Testing Web Applications for Vulnerabilities

SPEAKERGareth Bowker Ambersail :
PROFILEGareth Bowker works for Ambersail, a company specialising in network and Web Application security.
ABSTRACTGareth will be demonstrating some of the techniques he uses to test for web application vulnerabilities including SQL injection and cross site scripting. He will also be presenting some case studies from recent contracts.
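As a minimal illustration of the first technique mentioned (not the speaker's actual test suite), the sketch below shows why string-built SQL is injectable while a parameterised query is not, using Python's built-in sqlite3 module and an invented users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"   # classic tautology payload

# Vulnerable: attacker-controlled input concatenated into the SQL string
# turns the WHERE clause into a tautology and leaks every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % payload).fetchall()

# Safe: the same input bound as a parameter is treated as a literal
# string, which matches no user name.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()

print(len(vulnerable), len(safe))
```

A black-box tester probes for exactly this asymmetry: if a tautology payload changes the response, the input is likely being concatenated into a query.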
DATE2007-12-07
TIME14:10:00
PLACERoom 320, Physical Sciences B


TITLE

Principles Of Visualization Design: The Good, The Bad and The Ugly

SPEAKERDr Jeremy Walton Home Page : NAG Ltd :
PROFILEDr Jeremy Walton is a Senior Technical Consultant at NAG Ltd, with responsibility for the company's activities in visualization, particularly involving IRIS Explorer, NAG's visualization toolkit. His activities include application development, user support and training, technical marketing and visualization consulting. Jeremy is the leader of ADVISE, a DTI-funded research project in visualization and analysis, and has previously led NAG's contributions to the UK e-Science projects gViz and climateprediction.net. He has given numerous presentations and technical talks on NAG's work in visualization, besides contributing to several articles and papers in this field.

Before joining NAG in 1993, Jeremy was the leader of the visualization activity at BP Research, consulting for all parts of the BP Group. From 1984 to 1985, he was a post-doctoral researcher at Cornell University, working on the molecular simulation of adsorption. He holds a D.Phil in "Statistical Mechanics Of Liquids" from the University of Oxford, and a first class honours degree in Chemistry from Imperial College London.
ABSTRACTUsing visualization packages to turn numerical data into pictures can lead to better understanding, but only if the image is a good representation of the data. So what makes one visualization better than another? Although hard and fast rules probably can't be established for all circumstances, the consideration of several examples (good, bad and ugly) ought to lead to some general principles that could be applied when designing a visualization. This talk will attempt to elucidate them.
DATE2007-12-03
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLE

Agile Technologies

SPEAKERDan Abel
PROFILEDan Abel is a Consultant/Developer for ThoughtWorks. He was an undergraduate at Aberystwyth from 1992 to 1996. Dan has been cutting code in teams for 12 years - he has worked and run teams on a range of projects from a multilingual airline website to investment banking services and has worked in groups that range in size from two to fifty.
ABSTRACTEven when an Agile project fails, it can still be valuable. This talk uses real-world examples to show how each business benefited, and how the agile practices used on the projects were honed in retrospect.
DATE2007-11-26
TIME16:10:00
PLACEPhysics Lecture Theatre B