Recent Seminars

The Department has regular research seminars given by internal and prominent external speakers. They are open to all members of the University and other interested parties. The individual research groups also run seminars and group meetings. Details of these can be found on research-group web pages.

TITLE: The first 10 years in industry, selling software, and the emerging Big Data market
SPEAKER: Adam Fowler, MarkLogic
PROFILE:

Adam Fowler is a Senior Pre-Sales Engineer with MarkLogic Corporation. In the ten years since graduating with a degree in Computer Science from Aberystwyth he has worked as a developer, development team coordinator, and pre-sales engineer for a variety of small and large companies, including universities such as Aberystwyth and Derby, and companies such as FileNet, IBM and edge IPK. His work has spanned Financial Services, Insurance and the Public Sector, working with large partner SIs and selling software across Europe, North America and South Africa.

ABSTRACT:

Adam will start with a history of his first 10 years in industry, the highs and lows, covering small companies and large multinationals. He will then cover the pre-sales role and how to sell software generally, relating this to the types of activity a pre-sales engineer needs to carry out - and, importantly, why a pre-sales engineer is different from a salesperson. These sections include a few funny stories, including what not to ask a hotel concierge for! After this he will cover the software hype cycle - how software emerges in a new market, becomes popular, and is then commoditised. He will finish by talking about Big Data, what it actually means, and the software currently trying to solve these problems - including Hadoop and NoSQL software, and how open source relates to enterprise software vendors like MarkLogic. He will also briefly describe an academic challenge that MarkLogic is launching; entering a project could help pay toward your studies and give you the chance to present your project to MarkLogic's customers. Hopefully this will give Computer Science students an idea of the trends that will be waiting for them in industry, and open them up to pre-sales as a career option.

DATE: 2012-12-03
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: Autonomous boat control: working with a naval architect
SPEAKER: Dr Paul Miller, US Naval Academy
PROFILE: Paul is an Associate Professor of Naval Architecture at the United States Naval Academy. He received his doctorate at the University of California at Berkeley, where he also spent far too much time sailing. He has helped design over 70 vessels, only two of which have sunk, and has worked on autonomous vessels for the last six years.
ABSTRACT: Controlling an autonomous surface vessel is a challenge: not only is the ocean an ever-changing, moving surface, but naval architects often design vessels in ways that make them difficult to control. This seminar will present many of the challenges in autonomous vessel control and will give tips so that those designing the control system and those designing the hull can live in harmony.
DATE: 2012-11-26
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: Man, mouse and meaning: semantic approaches to the exploitation of multi-species phenotype data
SPEAKER: Dr Paul Schofield, University of Cambridge
PROFILE:
ABSTRACT:

The collection of huge volumes of complex and deep, formally annotated phenotype data from both systematic mutagenesis and hypothesis-driven studies using model organisms such as mouse and zebrafish is increasingly complemented by the formalisation of clinical phenotype annotation using the recently developed Human Phenotype Ontology. The problem of comparing the similarity of phenotypes within and between species, especially for the effects of specific mutations, has been largely solved by a new approach to phenotype decomposition using species-agnostic ontologies, such as the Gene Ontology and the Phenotype and Trait Ontology, which permit the relationship between phenotypes and diseases to be quantified and used for computational analysis. The application of this new approach will be illustrated with reference to the analysis of the human Mendelian overgrowth disorders and to determining the contribution to pathogenicity of genes contained within regions of human copy number variation (CNV).
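As a rough illustration of how ontology-based phenotype comparison works (a sketch only, not the speaker's actual pipeline, which uses information-content measures over the real ontologies), similarity between two terms can be computed from the overlap of their ancestor sets. The mini-ontology below is entirely made up:

```python
# Hypothetical mini-ontology: term -> set of parent terms
PARENTS = {
    "overgrowth": {"growth abnormality"},
    "short stature": {"growth abnormality"},
    "growth abnormality": {"phenotypic abnormality"},
    "phenotypic abnormality": set(),
}

def ancestors(term):
    """All ancestors of a term, including the term itself."""
    seen = {term}
    stack = [term]
    while stack:
        for parent in PARENTS[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def jaccard_similarity(a, b):
    """Jaccard overlap of the two terms' ancestor sets."""
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)

# The two toy terms share 2 of their 4 combined ancestor terms
print(jaccard_similarity("overgrowth", "short stature"))
```

Real systems replace the Jaccard measure with information-content-based similarity so that rare, specific shared ancestors count for more than generic ones.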

DATE: 2012-11-19
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: Relating Theory and Practice in Laboratory Work: A Variation Theoretical Study
SPEAKER: Anna Eckerdal, Uppsala University
PROFILE: Anna Eckerdal is a Lecturer at the Department of Information Technology, Uppsala University, where she teaches programming courses. Anna holds an M.S. in Science Education and a Ph.D. in Computer Science with specialisation in Computer Science Education (Uppsala University). Her research interests include how novice students learn to program, Threshold Concepts in Computer Science, and self-directed learning in Computer Science. Anna currently holds a three-year research grant from the Swedish Research Council.
ABSTRACT: Computer programming education has practice-oriented as well as theory-oriented learning goals. Here, lab work plays an important role in supporting students' learning. It is, however, widely reported that many students face great difficulties in learning both the theory and the practice, despite decades of effort to improve programming education. This paper investigates the important but problematic relation between the learning of theory and the learning of practice for novice computer programming students. Theory is discussed in terms of concepts, while practice is discussed in terms of the common programming activities students learn in the lab. Based on two empirical studies, it is argued that there is a mutual and complex dependency between learning concepts and learning practice: it is hard to learn one without the other, and either can become an obstacle that hinders further learning.
DATE: 2012-11-12
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: A generalized risk approach to path inference based on hidden Markov models
SPEAKER: Alexey Koloydenko, Royal Holloway (joint work with Jüri Lember, Tartu University, Estonia)
PROFILE:
ABSTRACT:

Motivated by the unceasing interest in hidden Markov models (HMMs), we re-examine hidden path inference in these models using a risk-based framework. While the most common maximum a posteriori (MAP)/Viterbi path estimator and the minimum-error/posterior decoder (PD) have long been around, other path estimators, or decoders, have either only been hinted at or applied more recently and in dedicated applications.

Over a decade ago, however, a family of algorithmically defined decoders aiming to hybridise the two standard ones was proposed by Brushe et al. This and other previously proposed approaches will be shown to have various serious problems, and we will mention some practical resolutions of those.

Furthermore, simple modifications of the classical criteria for hidden path recognition will be shown to lead to a new class of decoders. Dynamic programming algorithms to compute these decoders in the usual forward-backward manner will be presented.

A particularly interesting subclass of such estimators can also be viewed as hybrids of the MAP and PD estimators. Like previously proposed MAP-PD hybrids, the new class is parameterised by a small number of tunable parameters. Unlike their algorithmic predecessors, the new risk-based decoders are more clearly interpretable and, most importantly, work "out of the box" in practice, which is demonstrated on some real bioinformatics tasks and data. Some further generalisations and applications will be discussed in conclusion.
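For context, the MAP/Viterbi estimator that the abstract contrasts with other decoders can be sketched in a few lines; the two-state, two-symbol toy model below is illustrative only, not from the talk:

```python
# Minimal Viterbi (MAP path) decoder for a discrete HMM with a toy model.
import math

states = [0, 1]
start = [0.6, 0.4]                      # initial state probabilities
trans = [[0.7, 0.3], [0.4, 0.6]]        # trans[i][j] = P(next = j | current = i)
emit = [[0.9, 0.1], [0.2, 0.8]]         # emit[i][o] = P(observe o | state i)

def viterbi(obs):
    """Most probable hidden state path for an observation sequence."""
    # delta[t][i]: best log-probability of any path ending in state i at time t
    delta = [[math.log(start[i]) + math.log(emit[i][obs[0]]) for i in states]]
    back = []
    for o in obs[1:]:
        row, ptr = [], []
        for j in states:
            best_i = max(states, key=lambda i: delta[-1][i] + math.log(trans[i][j]))
            row.append(delta[-1][best_i] + math.log(trans[best_i][j]) + math.log(emit[j][o]))
            ptr.append(best_i)
        delta.append(row)
        back.append(ptr)
    # trace back from the best final state
    path = [max(states, key=lambda i: delta[-1][i])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

print(viterbi([0, 0, 1, 1, 1]))  # recovers the state switch: [0, 0, 1, 1, 1]
```

The posterior decoder differs in that it picks, at each time step independently, the state with the highest marginal posterior, which is what the risk-based framework in the talk generalises.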

DATE: 2012-10-29
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: The Dendritic Cell Algorithm: Review and Evolution
SPEAKER: Dr Julie Greensmith, University of Nottingham
PROFILE: Dr Julie Greensmith did her undergraduate degree in Pharmacology with lots of computing courses on the side, then a masters in Bioinformatics at Leeds. She spent some time as an intern at a fancy company (HP Labs) before taking up a PhD position at the University of Nottingham. This was followed by postdoctoral research and now a lectureship. In her spare time she enjoys lion taming (despite a cat allergy) and brass banding, had an album peak at number 10634 on iTunes, and is proud to have recently obtained Sabatier sponsorship for her knife juggling act. OK, so the bit about knife juggling isn't true, but the rest of it is.
ABSTRACT: The DCA is the newest of the mainstream artificial immune system algorithms. It is a data fusion and classification algorithm used primarily for two-class classification or anomaly detection problems. Its unique property is its combination of classification, filtering, signal processing and correlation functions. It has been applied to a number of real-world applications including computer network security, autonomous robotics, embedded systems, and standard machine learning datasets. Its main advantages over similar algorithms are its lightweight approach to data processing (linear worst-case run time) and its ability to process data in near real time.

The DCA is inspired by the dendritic cells of the human immune system. Specifically, it is based on an abstract model of the maturation process of natural DCs and on Matzinger's 'danger theory'. In the algorithm, a population of artificial DCs is created and each cell is presented with signal data. The algorithm performs correlation between signals and antigen to classify the antigen data into normal or anomalous classes.

The major criticisms of the DCA to date include the fact that if a single DC is used, the system computationally equates to a filtered linear classifier. Additionally, the mapping process between domain and signal/antigen requires a considerable amount of expert knowledge. Thirdly, numerous variants of the DCA now exist, all with slightly different setups, parameter choices and data mapping processes; this includes the recent hybrid fuzzy DCA, which further confuses the issue.

In this talk I will examine examples of DCA applications to date, in addition to presenting a clear definition of the most recent deterministic DCA. The evolution of the DCA is presented, from its initial conception as an abstract model through to an applied algorithm. As part of this research I also present conjecture as to the next steps for the development of this unconventional algorithm.
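As a rough, hypothetical sketch of the DCA's core idea (signal fusion driving cell maturation, followed by antigen labelling), the toy version below fuses "danger" and "safe" signals in a single cell; the weights, threshold, and input format are invented for illustration and do not reproduce the published algorithm:

```python
# Toy DCA-style signal fusion: a cell accumulates danger and safe signals
# alongside the antigen it samples; when cumulative signal crosses a
# migration threshold, the cell "migrates" and labels its collected antigen
# as anomalous if danger outweighed safe.

def dca_classify(stream, threshold=10.0):
    """stream: list of (antigen_id, danger_signal, safe_signal) tuples.
    Returns {antigen_id: 'anomalous' | 'normal'}."""
    verdicts = {}
    csm = danger = safe = 0.0       # cumulative signal magnitudes
    antigens = []
    for antigen, d, s in stream:
        antigens.append(antigen)
        danger += d
        safe += s
        csm += d + s                # both signal types drive maturation
        if csm >= threshold:        # cell migrates and presents its antigen
            label = "anomalous" if danger > safe else "normal"
            for a in antigens:
                verdicts[a] = label
            csm = danger = safe = 0.0
            antigens = []
    return verdicts

result = dca_classify([("a", 4, 1), ("a", 5, 1), ("b", 1, 4), ("b", 1, 5)])
```

The real algorithm runs a population of cells with differing thresholds and aggregates their verdicts per antigen (the MCAV measure), which is what gives it its filtering and correlation behaviour.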
DATE: 2012-10-15
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: Engineering Challenges in Regenerative Medicine
SPEAKER: Professor Zhanfeng Cui, PhD, DSc, Oxford University
PROFILE: Professor Cui is the Donald Pollock Professor of Chemical Engineering at Oxford University. His research lies at the interface between chemical engineering, the life sciences and membrane technology. He is the Founding Director of the Oxford Centre for Tissue Engineering and Bioprocessing, serves on the Research Councils as both committee and panel member (BBSRC, EPSRC, MRC, CCLRC) for grant reviews, and sits on the editorial boards of several relevant journals (Journal of Membrane Science, Food and Bioproduct Processing, Patents in Biotechnology, Patents in Engineering, China Particuology, Science (China), Chinese Journal of Antibiotics, Chinese Journal of Biomechanics, etc.).
ABSTRACT:

Regenerative medicine aims to develop new therapies and treatments for currently incurable diseases and conditions, and is a fast-growing field in research, development and commercialisation. It mainly follows two inter-related approaches: tissue engineering and stem cell therapy. Regenerative medicine needs a multidisciplinary effort involving physical and life scientists, engineers and clinicians.

Engineering plays an important role in translating regenerative medicine from the laboratory to the hospital bedside. In this presentation, examples of the critical contributions of engineers will be discussed, including scale-up and scale-out, bioprocessing, control of stem cell differentiation, and quality control. A specific example is the prediction of stem cell differentiation, where novel information technologies and computing techniques, such as data mining and classification, can make a significant impact. The outcome can potentially save a great deal of experimental effort, and hence time and money.

DATE: 2012-10-01
TIME: 16:10:00
PLACE: To be decided


TITLE: Intelligent Data Analysis: Issues and Opportunities
SPEAKER: Professor Xiaohui Liu, Brunel University
PROFILE: Xiaohui Liu is Professor of Computing at Brunel University, where he directs the Centre for Intelligent Data Analysis, conducting interdisciplinary research involving artificial intelligence, dynamic systems, human-computer interaction, and statistical pattern recognition.
ABSTRACT: Intelligent Data Analysis is needed to address the interdisciplinary challenges of analysing data effectively. In this talk, I will look at some of the key issues and opportunities in modern data analysis: in particular, how to ensure that quality data are obtained for analysis, how to meet the challenges of modelling dynamics, how to handle human factors with care, and how to consider all of these when analysing complex systems. Examples in biology, finance, medicine and security will be drawn from work carried out at Brunel and elsewhere.
DATE: 2012-05-14
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: "Changing the World with Intelligent Algorithms"
SPEAKER: Doug Aberdeen
PROFILE: Doug Aberdeen wasn't sure what he really wanted to do until he hit his 5th (!) year of undergrad. Somehow he was accepted for graduate studies, and all the other job possibilities seemed boring. But then, after a Ph.D. and a few years as a post-doc in machine learning, he was itching to find out what the "real world" was like. The rest of the story is in the talk.
ABSTRACT:

Algorithms have already changed the world and are continuing to do so. From the exploits of British cryptographers during WWII through to the algorithms driving modern search engines, the common theme is the smart application of clever algorithms to help people. I'm going to try to illustrate by example how Google is carrying on this story and how today's students can continue it. To begin, I'll talk a bit about the founding of Google and PageRank. To finish, I'll talk about my personal experiences developing and deploying algorithms for Gmail's spam detection and the Priority Inbox. I'll also talk about how it's one thing to develop an algorithm that works for an individual, but something different to make it work for millions of users.

This talk is aimed at a broad audience from first years through to staff interested in machine learning.
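The PageRank algorithm mentioned in the abstract can be sketched as a simple power iteration. The four-page link graph below is hypothetical, and production systems add handling for dangling pages and run at vastly larger scale:

```python
# Minimal PageRank power-iteration sketch on a toy link graph.

def pagerank(links, damping=0.85, iters=50):
    """links: {page: [pages it links to]}. Returns {page: rank}, summing to ~1."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)        # each page splits its rank
            for q in outs:
                new[q] += damping * share      # among the pages it links to
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(links)
# "c" collects links from a, b and d, so it ends up with the highest rank
```

The damping factor of 0.85 is the value used in the original PageRank paper; it models a surfer who occasionally jumps to a random page rather than following links forever.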

DATE: 2012-05-04
TIME: 14:00:00
PLACE: C22 Hugh Owen


TITLE: An Evolutionary Simulation Approach to Extravagant Honesty
SPEAKER: Professor Seth Bullock, University of Southampton
PROFILE: Prof Seth Bullock is a leading UK complexity science researcher at the University of Southampton. He is a founding member of the Agents, Interaction and Complexity group within the School of Electronics and Computer Science and is Director of the University's Institute for Complex Systems Simulation (www.icss.soton.ac.uk). His research takes place at the intersection between complexity science, biological modelling, and artificial intelligence. Recent research activities include leading the EPSRC Resilient Futures project, which explores the resilience of future infrastructure to terrorist attack and extreme weather events, and the EPSRC Care Life Cycle project, which explores how the supply of and demand for health care and social care will be affected by demographic change. He served as Conference Chair for the 11th International Conference on Artificial Life on its first visit to Europe, has published in journals spanning health, economics, biology, computing, architecture, geosciences and physics, and was the only physical scientist invited to contribute to Richard Dawkins' OUP festschrift.
ABSTRACT:

Given their "selfish" genes, it is remarkable that biological creatures pay any attention to the displays, advertisements, threats, warnings, etc., directed at them, and equally remarkable that so many of these signals are produced in the first place. Moreover, many of these biological signals appear to be needlessly extravagant in terms of the energy spent, time taken, and risks incurred in producing them.

At first sight, for instance, it is difficult to understand why peacocks persist in constructing and maintaining tails that are a significant and, to the disinterested observer, irrational drain on resources. Might the same information not be conveyed through a stable signalling system employing much cheaper signals?

In this talk I will present an evolutionary simulation approach to answering such questions. Here, multiple artificial signalling systems are allowed to compete with one another over evolutionary time, and where more than one signalling system is viable, the models explain why the more extravagant signalling systems tend to be favoured by evolution.

DATE: 2012-04-30
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: Digital Histology - From Microscopes to Computer Analysis and Visualisation
SPEAKER: Dr Derek Magee, School of Computing, University of Leeds
PROFILE: Derek's research is based on the practical application of model-based machine vision in domains such as agriculture, traffic monitoring and medical image analysis. His particular interest is in statistical and logical modelling of the spatial and temporal characteristics of such domains.
ABSTRACT: Once upon a time in a hospital not far from you, histopathologists used microscopes to examine pieces of tissue extracted from the human body to diagnose disease. In fact not much has changed (yet!). However, there is a better way that involves digitising stained tissue samples at very high resolution. In addition to facilitating useful things such as digital storage and transmission, this affords the opportunity for computer scientists to get involved and have some fun. We can apply all the image analysis and visualisation techniques that we've developed for digital radiology to this domain. Additionally, it introduces new complications, as the images are huge, colour, 2D only, and are often used in conjunction with other imaging modalities (e.g. MRI, or other histopathology images with different chemical stains). This talk will discuss some of the ongoing work in Leeds on image analysis, 3D histopathology and novel interfaces.
DATE: 2012-03-26
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: Nudge to Shape Physical Activity Behaviour with Personal Mobile Devices
SPEAKER: Dr Parisa Eslambolchilar, Swansea University
PROFILE: Dr Parisa Eslambolchilar has been a lecturer in the FIT Lab at Swansea University since March 2007. Her research interests are in the areas of dynamic, continuous interaction with small computing appliances, multimodal interaction, human-computer interaction (HCI) with medical devices, and persuasive technologies. She has run two successful international workshops on the subject of Persuasion, Nudge, Influence and Coercion using Ubiquitous Technologies in conjunction with the CHI and Mobile HCI conferences. She has been an International Programme Committee member for the UbiComp 2011 and Pervasive Health 2012 conferences, and is chairing the Mobile HCI 2012 workshops. She has given many public talks, including at the SONY Computer Science Research Institute (Paris), SHARP research lab (Oxford), and the Knowledge Media Institute (Milton Keynes). She is a co-investigator on "Healthy Interactive Systems: Resilient, Usable and Appropriate Systems in Healthcare", an EPSRC-funded platform grant (ref EP/G003971), and a co-investigator (leading researcher in Swansea) on the EPSRC-funded project EP/H006966/1, "CHARM: Digital technology and interfaces: shaping consumer behaviour by informing conceptions of 'normal' practice".
ABSTRACT: The aim of this talk is to provide a focal point for research and technology dedicated to persuasion and influence. Patterns of consumption such as drinking and smoking are shaped by the taken-for-granted practices of everyday life. However, these practices are not fixed; they are 'immensely malleable'. Consequently, it is important to understand how the habits of everyday life change and evolve. Our decisions are inevitably influenced by how the choices are presented, so it is legitimate to deliberately 'nudge' people's behaviour in order to improve their lives. Mobile devices can play a significant role in shaping normal practices in three distinct ways: (1) they facilitate the capture of information at the right time and place; (2) they provide non-invasive and cost-effective methods for communicating personalised data that compare individual performance with relevant social group performance; and (3) social network sites running on the device facilitate communication of personalised data that relate to the participant's self-defined community. In this talk I will focus particularly on persuasive technologies available for shaping physical activity behaviour on mobile platforms, including the bActive application developed through the CHARM project.
DATE: 2012-03-12
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: "From robots that dance to moles that need to be whacked: taking programming off the screen, out of the lab and back into the consciousness of our kids."
SPEAKER: Chris Martin, University of Dundee
PROFILE:

Chris is a researcher in applied computing working in the School of Computing, University of Dundee, Scotland, where he is involved in research, teaching and outreach. As well as conventional computing - programming on a variety of platforms - he is interested in how we make technology fit the people it's crafted for. Using tools such as focus groups, live theatre and ethnographic techniques, he is often not surprised to discover that the richness and complexity of the people we work and design for far outweigh the complexity of the technology we seek to construct.

In particular, the focus of his ongoing PhD research is computer programming and how programmers support themselves in solving problems - where a programmer may be a senior analyst in a large software development company, or a primary school child first discovering that sequence, decision and repetition can make a robot dance. Can grounding the abstract components of a computer program in a physical device improve success in programming?

ABSTRACT: In this talk I will share my experiences of, and aspirations for, recapturing the imagination of school kids and keeping them hungry for computing once we have them on our courses. Three topics will mix and resonate together. Outreach activities: robot dance, the bug catcher challenge, and the dance of creative code (work in progress). Level one teaching: physical computing and data visualisation - what can be achieved with two semesters and two enabling technologies. Finally, the ongoing PhD: building the evidence-based case for these ideas and the experimental methods I employ.
DATE: 2012-03-05
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: Robot Traders & Flash Crashes: WTF?
SPEAKER: Professor Dave Cliff, University of Bristol
PROFILE: Dave Cliff is a professor of computer science at the University of Bristol and has previously worked as an academic at the Universities of Sussex and Southampton in the UK, and at MIT in the USA. He has also worked as a research scientist for Hewlett-Packard Labs and as a Director/Trader on Deutsche Bank's Foreign-Exchange Complex Risk Desk. Since 2005 he has served as Director of a £15m national research and training initiative addressing issues in the science and engineering of Large-Scale Complex IT Systems (LSCITS: see www.lscits.org). He is an author of approximately 100 academic publications and an inventor on 15 patents. In 1996 he invented one of the first adaptive autonomous algorithmic trading systems applicable to financial markets, which in 2001 was demonstrated by IBM to outperform human traders. He is currently serving as one of the group of eight experts leading the UK Government's "Foresight" investigation into the future of computer trading in the financial markets, a two-year project run by the Government Office for Science.
ABSTRACT: In the past decade, the global financial markets have become very heavily dependent on automated trading systems, where computers perform trading jobs that were previously done by humans. Automated trading systems can now perform at truly superhuman levels, integrating vast amounts of data and reacting at split-second speeds that no human trader could ever match. The mix of human traders and automated systems, and the planetary interconnectedness of the major trading exchanges, mean that the global financial markets are now a single ultra-large-scale socio-technical system, built from risky technology. Various events in the past 18 months have served to highlight that the global financial system may now be less resilient, and more vulnerable to sudden severe failures, than it has ever been in the past. In this lecture I will talk about how we got to where we are, what the current problems are, what's likely to happen next, and what might be done to make things better.
DATE: 2012-02-27
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: Taming Schrödinger's cat
SPEAKER: Daniel Burgarth
PROFILE: Daniel Burgarth has recently joined IMAPS as a lecturer. Previously, he held an EPSRC Fellowship in Theoretical Physics at Imperial College, London. His research interests include the dynamics of quantum many-body systems, control theory and quantum information.
ABSTRACT: The last decades have seen a paradigm shift in our view of quantum theory. While the wave-like nature of quantum mechanics was formerly considered mostly a blurry, noise-like phenomenon, it is now known that it can be a powerful resource for computation. Roughly speaking, nature is, at its most fundamental level, uncertain. When a quantum computer is challenged with a fundamentally uncertain input, it provides in some sense all possible answers simultaneously, which gives rise to its extraordinary power. To use this power we have to learn how to tame Schrödinger's cat. In this lecture I will give a very basic introduction to quantum computing and its current challenges.
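The superposition idea can be made concrete with a toy single-qubit simulation: a state is a pair of amplitudes, a Hadamard gate puts |0⟩ into an equal superposition, and measurement probabilities follow the Born rule. This sketch is purely pedagogical, not taken from the talk:

```python
# Toy single-qubit simulation: states are 2-vectors of amplitudes.
import math

def hadamard(state):
    """Apply the Hadamard gate to a 1-qubit state [amp0, amp1]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    return [abs(amp) ** 2 for amp in state]

superposed = hadamard([1.0, 0.0])   # start in |0>, now an equal superposition
probs = probabilities(superposed)   # both outcomes equally likely
```

Applying the Hadamard gate a second time returns the state to |0⟩ exactly, a small demonstration that quantum "uncertainty" is reversible interference rather than classical noise.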
DATE: 2012-02-20
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: "Meta-Morphogenesis of information processing in biological evolution, learning, development, and culture."
SPEAKER: Professor Aaron Sloman
PROFILE: After a BSc in Mathematics and Physics at Cape Town in 1956, he went to Oxford intending to continue in mathematics, but was seduced by philosophy and obtained a DPhil in Philosophy of Mathematics (1962). He taught philosophy at Hull for two years, then from 1964 to 1991 at Sussex University, except for a year in Edinburgh in 1972-73. He encountered AI around 1969 and decided that philosophical progress required designing increasingly complex working fragments of minds of many kinds -- a very long, slow process. He published 'The Computer Revolution in Philosophy' in 1978.[1] He helped to develop AI/Cognitive Science teaching and research and the formation of COGS (Cognitive and Computing Sciences) at Sussex, and contributed to the development and management of Poplog, a toolkit supporting teaching and research in AI.[2] He moved to Birmingham in 1991, continuing interdisciplinary research in the philosophy of mind, mathematics, science and language; AI and tools for research and teaching in AI; and theories of development and evolution, including the development and evolution of architectures, forms of representation, control mechanisms, visual processing and reasoning, especially the role of the environment in convergent evolution.[3] He was elected a Fellow of AAAI, AISB and ECAI, and received an Hon DSc from Sussex in 2006. He is currently retired, but doing research full time.

[1] http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
[2] http://www.cs.bham.ac.uk/research/projects/poplog/freepoplog.html
[3] http://www.cs.bham.ac.uk/~axs/my-doings.html

ABSTRACT:

Much of Turing's work was about how large numbers of relatively simple processes could cumulatively produce qualitatively new large-scale results, e.g. Turing-machine operations producing results comparable to those of human mathematical reasoning, and micro-interactions in physico-chemical structures producing global transformations as a fertilised egg becomes an animal or plant. In the same spirit, this talk presents some aspects of a draft theory of "meta-morphogenesis": the processes and mechanisms involved in interactions between changing environments, changing animal morphologies, changing information-processing capabilities, and the changing mechanisms that produce all these changes.

"Informed control" is a core feature of all life, starting with control of various kinds of physical behaviour, then later also informed control of information processing in individuals, in groups of individuals in one or more species, and in larger, more abstract systems. By understanding the varied pressures leading to these changes and the many and varied results of such changes, we can gain new insights into issues addressed in a variety of disciplines, including computer science, AI/Robotics, cognitive science, neuroscience, psychology, psychiatry, linguistics, philosophy and education. I'll try to show (if time permits) how some of what we have learnt about types of information-processing systems in the last half century or so can illuminate philosophically puzzling features of animal minds, including the existence of "qualia", minds with mathematical intelligence, and the roles of precursors of human language required for perception, motivation, planning, plan execution, learning, and the later development of languages for communication. This should also enhance our still incomplete understanding of the requirements for future machines rivalling biological intelligence. One of many implications is the short-sightedness of some current theories of "embodied" cognition.

DATE: 2012-02-13
TIME: 16:10:00
PLACE: Physical Sciences Lecture Theatre B


TITLE: Computational Modelling of Tumour Growth
SPEAKER: Dr Matthew Hubbard, School of Computing, University of Leeds
PROFILE: Matthew is a Senior Lecturer in the School of Computing, University of Leeds, which he joined in September 2000. Prior to this he spent a number of years doing postdoctoral research, first in the Department of Mathematics at the University of Reading (where he also did his Ph.D.) and then in DAMTP at Cambridge University. He also has a couple of years of industrial experience, having worked for British Aerospace immediately after graduating from his first degree. His research is generally in the area of Scientific Computing and Computational Fluid Dynamics, but his main focus is on creating numerical methods which naturally retain the properties of the underlying partial differential equations.
ABSTRACT: Tumour growth is a complex and poorly understood process which is open to analysis by a range of mathematical and computational tools. These tools can be used to provide insight into the biological processes which might cause the patterns of behaviour seen in vitro and in vivo, leading ultimately to a model which can be used to simulate tumour growth.
This talk will consist of two parts. First, I will introduce some of the fundamental issues relating to mathematical and computational modelling, placing particular emphasis on biomedical processes. The second part of the talk will describe a specific computational model of vascular tumour growth, developed in collaboration with Prof Helen Byrne of the University of Oxford. Computational simulations will be shown which demonstrate the model's ability to reproduce classical tumour structures.
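For a flavour of what such computational models involve (the vascular model in the talk is far more sophisticated), the simplest tumour-growth ODE, logistic growth, can be time-stepped with forward Euler; the parameter values below are arbitrary:

```python
# Illustrative only: logistic tumour growth dV/dt = r*V*(1 - V/K),
# integrated with an explicit (forward Euler) time step.

def simulate_logistic(v0=0.01, r=1.0, k=1.0, dt=0.01, steps=2000):
    """Return tumour volume over time under logistic growth."""
    v = v0
    history = [v]
    for _ in range(steps):
        v += dt * r * v * (1 - v / k)   # Euler step of dV/dt = rV(1 - V/K)
        history.append(v)
    return history

volumes = simulate_logistic()
# volume grows from v0 and saturates just below the carrying capacity K
```

The numerical-methods theme of the profile applies even here: a careless choice of time step can make the discrete scheme overshoot or oscillate around the carrying capacity even though the continuous equation never does.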
DATE2012-01-30
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE"Making the most of the HEA"
SPEAKERDr. Mark Ratcliffe
PROFILE
ABSTRACTMark will talk about how departments can take full advantage of the HEA, in terms of available funding, workshops, conferences etc. Everyone at Aberystwyth could benefit financially in one way or another. He will also talk more broadly about work on employability, particularly in regard to how we can maximise student take-up of industrial placements.
DATE2012-01-23
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLE"Diagnosing system failures in 2015: What happens when systems with 13,000 CPUs and 64 TB of memory go wrong."
SPEAKERDr Clive King
Senior Staff Engineer
PROFILEDr Clive King is a Senior Staff Engineer in Oracle Solaris Revenue Product Engineering. His main focus is on the diagnosis of complex stack performance, data integrity and availability issues on large enterprise class systems. He also likes to fix the underlying process problems so something similar does not happen again.

He has worked at Sun, now Oracle, for 14 years. Previously he worked for Cray and at Aberystwyth University, where he also gained a PhD in the area of Distributed Systems. He is a B.C.S. Fellow, a member of the B.C.S. Accreditation Panel, an I.S.E.B. Chief Examiner and a PhD examiner.
ABSTRACT

In 1990, a typical Sun system had a single 10 MHz CPU and 4 MB of memory, and might have run in the region of 30-50 processes. Today Oracle ships systems with 512 CPUs and 4 TB of memory, and such systems run at most a few hundred thousand processes. The roadmap suggests that by 2015 this will rise to CPU counts around 13,000 and 150 TB of memory, serving workloads in excess of 1 million processes.

Like C.S.I., when a system fails a post-mortem is required. The crash dump is the body: an image of the system memory at the time of failure. This talk looks at the technical, logistical and tooling challenges of diagnosing system failure after the fact when the body is 4 TB in size, and the challenges of scaling post-mortem failure diagnosis to ever larger configurations.

DATE2011-11-28
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLEVisual Exploration of Time Series: the Multi-dimensional Data Challenge
SPEAKERDr Rita Borgo Home Page : Swansea University - Computer Science :
PROFILE
ABSTRACT

Interactive exploration of time series data faces challenges arising from increases in both data size and the richness of the information carried.

A third, parallel issue of particular relevance to visualization comes from inherent human limitations in processing large amounts of information, an aspect that seriously constrains the visual display of data.

These three factors currently dominate the design of new visualization solutions to support the exploration of time-series data in search of interesting features and trends.

In this talk we will present an overview of work developed at Swansea University to tackle these challenges. Three major results in the fields of video visualization and remote-sensing data analysis will be presented, together with some future directions and open questions.

DATE2011-11-14
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLEThe Danger of Software Patents
SPEAKERDr Richard Stallman
President
Free Software Foundation Free Software Foundation : E-mail :
PROFILERichard Stallman launched the free software movement in 1983 and started the development of the GNU operating system (see www.gnu.org) in 1984. GNU is free software: everyone has the freedom to copy it and redistribute it, as well as to make changes either large or small. The GNU/Linux system, basically the GNU operating system with Linux added, is used on tens of millions of computers today. Stallman has received the ACM Grace Hopper Award, a MacArthur Foundation fellowship, the Electronic Frontier Foundation's Pioneer Award, and the Takeda Award for Social/Economic Betterment, as well as several honorary doctorates.
ABSTRACTRichard Stallman will explain how software patents obstruct software development. Software patents are patents that cover software ideas. They restrict the development of software, so that every design decision brings a risk of getting sued. Patents in other fields restrict factories, but software patents restrict every computer user. Economic research shows that they even retard progress.
DATE2011-10-31
TIME16:10:00
PLACEArts Centre Theatre


TITLEMultiple Criteria Decision Making and Systems Design
SPEAKERProfessor Peter Fleming Home Page : University of Sheffield : E-Mail :
PROFILEPeter Fleming is Professor of Industrial Systems and Control in the Department of Automatic Control and Systems Engineering and Director of the Rolls-Royce University Technology Centre for Control and Systems Engineering at the University of Sheffield, UK. His control and systems engineering research interests include control system design, system health monitoring, multi-criteria decision-making, optimisation and scheduling, and applications of e-Science. He has over 400 research publications, including six books, and his research interests have led to the development of close links with a variety of industries in sectors such as automotive, aerospace, energy, food processing, pharmaceuticals and manufacturing. He is a Fellow of the Royal Academy of Engineering, a Fellow of the International Federation of Automatic Control, a Fellow of the Institution of Engineering Technology, a Fellow of the Institute of Measurement and Control, and is Editor-in-Chief of International Journal of Systems Science.
ABSTRACTDesign problems arising in control and systems can often be conveniently formulated as multi-criteria decision-making problems. Inevitably, these problems often comprise a relatively large number of criteria. Many-objective optimisation poses difficulties for multiobjective optimisation algorithms which have been designed to solve problems with two or three objectives and alternative approaches for addressing many objectives will be described. Through close association with designers in industry, a range of machine learning tools and associated techniques have been devised to address the special requirements of many-criteria decision-making. These include visualisation and analysis tools to aid the identification of conflicting and non-conflicting criteria, interactive preference articulation techniques to assist in interrogating the search region of interest and methods for exploring design options for cases where constraints may be relaxed or tightened. Industrial design exercises will demonstrate these approaches.
DATE2011-10-03
TIME16:10:00
PLACEPhysical Sciences Lecture Theatre B


TITLETowards a 'language' of facial expressions - from cognition to computation
SPEAKERDr. Christian Wallraven E-Mail : Institution :
PROFILE
ABSTRACTThe face is capable of producing an astonishing variety of movements, ranging from larger-scale head movements to minute muscle twitches that are barely visible. Equally astonishing are the perceptual and cognitive processes with which we humans decode these signals in order to identify someone's particular smile, read the mood a person is in, or detect whether a comment was meant seriously or ironically. In both the cognitive and the computational sciences, however, the focus of research has been largely on the so-called universal expressions - expressions such as anger and fear which carry strong emotional content and which are commonly identified across cultures. While important, in daily life these universal expressions occur relatively rarely, with conversational and communicative facial expressions - slight smiles, or bored faces - being much more common. These expressions are usually also much subtler in terms of the facial movements involved, making them much harder to detect and process computationally. In this talk, I will describe our recent research in two areas: first, a summary of cross-cultural studies investigating the perceptual and cognitive processes in decoding complex facial expressions, and, second, a brief introduction to research in collaboration with Cardiff University in which we attempt to model and manipulate facial performances during long conversations.
DATE2011-07-14
TIME11:10:00
PLACEB20 Llandinham Building


TITLEImage-Based Biomedical Modeling, Simulation and Visualization
SPEAKERChuck Hansen Scientific Computing and Imaging Institute University of Utah : Home Page :
PROFILE
ABSTRACTIncreasingly, biomedical researchers need to build functional computer models from images (MRI, CT, EM, etc.). The "pipeline" for building such computer models includes image analysis (segmentation, registration, filtering), geometric modeling (surface and volume mesh generation), large-scale simulation (parallel computing, GPUs), and large-scale visualization and evaluation (uncertainty, error). In this presentation, I will describe research challenges and software tools for image-based biomedical modeling, simulation and visualization, and discuss their application to important research and clinical problems in neuroscience, cardiology, and genetics.
DATE2011-06-17
TIME12:00:00
PLACEPhysical Sciences Lecture Theatre B


TITLEBiology becomes Data Intensive: The Challenges of Data Integration for Systems Biologists
SPEAKERChris Rawlings, Bioinformatics and Biomathematics, Rothamsted Research
PROFILE
ABSTRACT

Biology is rapidly being re-shaped as a data-intensive science as biologists are faced with ever increasing challenges from both the scale and complexity of the data being generated by transformational technologies such as next generation genome sequencing techniques.

Furthermore, the adoption of systems approaches to biological research to address some of the grand challenges in medicine and agriculture requires many diverse types of complex data to be brought together in ways that were not previously envisaged. These challenges bring data integration techniques to the fore as one of the unsolved problems of bioinformatics. In this seminar I will introduce the open-source, graph-based Ondex data integration and visualisation system that we have been developing at Rothamsted, and show examples of how it has been used in a range of systems biology projects by solving practical problems in syntactic and semantic data integration.

DATE2011-06-08
TIME13:00:00
PLACEPhysical Sciences Lecture Theatre A


TITLEPaper, Geometry and Money
SPEAKERProfessor Roger Boyle Home Page : School of Computing, University of Leeds :
PROFILE
ABSTRACTWe present an overview of three current or recent projects at Leeds. The study of paper by codicologists and papyrologists has many motivations; often the material is rare, delicate and very “difficult” to analyse and see. We consider a particular pair of case studies of archaic Arabic material that has yielded to a model-based attack, succeeded by statistically enhanced template matching. Many visual surveillance applications deploy single uncalibrated cameras over uncalibrated scenes; recapture of any 3D information then requires some constraint to be enforced. We have used expectations about the speed distributions in congested scenes to perform this task in a novel manner. Undergraduates have skills that they often freelance, and the demand for these often exceeds supply. We describe a model for matching skills and requirements that remunerates and provides CV-fodder in a manner that benefits students, employers and the host university.
DATE2011-03-28
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEICT Disaster Recovery at Aberystwyth University
SPEAKERTim Davies IS Home Page :
PROFILEAssistant Director: ICT and Customer Services Information Services, Aberystwyth University
ABSTRACTThe talk will cover the importance of having a DR plan, and of testing and training. We will cover Information Services' "Disaster Day" - why we do it and the benefits. Furthermore, it will delve into some of the technologies and methods that IS use to achieve service continuity.
DATE2011-03-14
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLERecent Advances in Biometrics, including Gait and Ear
SPEAKERProfessor Mark Nixon School of Electronics and Computer Science : Home Page :
PROFILEMark Nixon is the Professor in Computer Vision at the University of Southampton UK. His research interests are in image processing and computer vision. His team develops new techniques for static and moving shape extraction which have found application in automatic face and automatic gait recognition and in medical image analysis. His team were early workers in face recognition, later came to pioneer gait recognition and more recently joined the pioneers of ear biometrics. Amongst research contracts, he was Principal Investigator with John Carter on the DARPA supported project Automatic Gait Recognition for Human ID at a Distance. His vision textbook, co-written with Alberto Aguado, Feature Extraction and Image Processing (Academic Press) reached 2nd Edition in 2008. With Tieniu Tan and Rama Chellappa, their book Human ID based on Gait was published in 2005. Prof. Nixon is a member of the IEEE and Fellow IET and FIAPR.
ABSTRACTBiometrics concerns recognising people automatically by personal characteristics. Using computer vision, biometrics can identify people whilst enjoying the advantages of data acquisition without subject contact (or cooperation). The non-invasive biometric of greatest interest is automatic face recognition, and this has led to practical deployment. Others have been developing too: people can be recognised by the way they walk - their gait - and by their ear. There are even approaches which rely on human description, correlated to video information. These approaches are then suited to applications beyond access control: they can be deployed and refined for surveillance, and there is emergent interest in their deployment in forensics. This talk will survey the state of the art in these approaches and consider ways in which the UK can benefit from their deployment, not just as security mechanisms but also more widely to aid and abet society's progress.
DATE2011-02-28
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEPostgraduate/Research Opportunities within the Department of Computer Science
SPEAKERRepresentatives of Departmental Research Groups
PROFILE
ABSTRACTIf you are interested in studying for an Advanced MSc, MPhil or PhD degree, then this talk is for you. There will be a short overview of what it means to study for these degrees, and issues such as period of study, money, University Competition for funding etc. will be mentioned. The remainder of the seminar will consist of talks from the four departmental Research Groups. They will present examples of current research within their groups and provide a flavour of possible future research projects that you might be interested in. Our four research groups are: Advanced Reasoning; Computational Biology; Intelligent Robotics and the Vision, Graphics and Visualisation group.
DATE2011-02-07
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLELinux: Evolving A Complex System On The Fly
SPEAKERAlan Cox Linux, Intel :
PROFILE
ABSTRACT

The Linux kernel grows at over a line a minute, and a line of code changes every fifteen seconds. Releases are done about quarterly and there is no separate long term development codebase.

This seminar explores the history of the kernel development process and how over a thousand people with no formal management structure continually re-engineer a complex system.
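As a back-of-the-envelope aside (not part of the abstract, and assuming a ~90-day quarter), the quoted rates imply roughly 130,000 net new lines and half a million changed lines per release:

```python
# Back-of-the-envelope check on the quoted rates (assumes a 90-day quarter).
minutes_per_quarter = 90 * 24 * 60
net_new_lines = minutes_per_quarter * 1            # "over a line a minute"
changed_lines = (minutes_per_quarter * 60) // 15   # one changed line per 15 s

assert net_new_lines == 129_600    # ~130k net new lines per quarterly release
assert changed_lines == 518_400    # ~half a million changed lines per release
```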

DATE2011-01-31
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEContradiction and Inconsistency in Fuzzy Sets
SPEAKERProfessor Chris Hinde Computer Science, Loughborough University :
PROFILEProfile Page
ABSTRACTFuzzy sets are useful for modelling vague concepts. In 1983 Atanassov introduced Intuitionistic Fuzzy Sets, based on membership and non-membership values. The evidence supporting these takes the form of elimination of possibilities, as used in ordinary fuzzy sets, together with support for non-possibilities, from which the non-membership function is derived. Contradiction can arise between these two functions, and a logic for processing contradictory fuzzy sets is developed. The existence of contradiction and inconsistency is usually regarded as something to be avoided, but both can be used to derive knowledge about the world, on the assumption that the real world is neither contradictory nor inconsistent.
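As an illustrative aside, the interplay of membership and non-membership can be sketched in a few lines of Python. The classification rule and values below are a simplified illustration, not Prof. Hinde's formulation:

```python
# Illustrative sketch: evidence assigns each element a membership value mu and
# an independently derived non-membership value nu. In Atanassov's
# intuitionistic fuzzy sets mu + nu <= 1; when independently gathered evidence
# pushes mu + nu above 1, the pair is contradictory.

def classify(mu: float, nu: float) -> str:
    """Classify a (mu, nu) evidence pair."""
    total = mu + nu
    if total > 1.0:
        return "contradictory"   # both well supported: conflicting evidence
    if total < 1.0:
        return "hesitant"        # 1 - mu - nu is the hesitation margin
    return "consistent"          # classical fuzzy case: nu = 1 - mu

assert classify(0.8, 0.5) == "contradictory"
assert classify(0.75, 0.25) == "consistent"
assert classify(0.3, 0.2) == "hesitant"
```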
DATE2010-12-06
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEEvil on the Internet
SPEAKERDr Richard Clayton Home Page :
PROFILEDr Richard Clayton is a security researcher in the Computer Laboratory of the University of Cambridge. He's been studying wickedness on the Internet for years - whether spam, denial of service attacks (intentional and inadvertent), or particularly phishing: the use of fake bank webpages to steal credentials, and later all of your money.
He is currently collaborating with the National Physical Laboratory (NPL), and spending half his time in Teddington, on a project that will develop robust and accurate measurements of Internet security mechanisms.
ABSTRACTThere are a lot of evil things on the Internet if you know where to look for them. Phishing websites collect banking credentials; mule recruitment websites entice people into money laundering; fake escrow sites defraud the winners of online auctions; fake banks hold the cash for fake African dictators; and there are even Ponzi scheme websites where (almost) everyone knows that they're a scam. This talk will show you live examples of these sites, explain how they work, and tell you what little we currently know about the criminals who operate them.
DATE2010-11-22
TIME16:10:00
PLACEPhysics Lecture Room 320


TITLENon-Stationary Fuzzy Reasoning in Clinical Decision Support
SPEAKERDr Jon Garibaldi Intelligent Modelling and Analysis Research Group, School of Computer Science, University of Nottingham :
PROFILEProfile Page
ABSTRACTFuzzy sets were introduced by Zadeh in the 1960s, and were subsequently expanded into a complete systematic framework for dealing with uncertainty. As part of the generic fuzzy methodologies, fuzzy inference systems were proposed for the modelling of human reasoning with uncertain data and knowledge. However, standard fuzzy sets and fuzzy reasoning do not model the variability in decision making that is typically exhibited by all human experts in any domain. Variation may occur among the decisions of a panel of human experts (inter-expert variability), as well as in the decisions of an individual expert over time (intra-expert variability).
Dr Garibaldi has introduced the concept of non-stationary fuzzy sets, in which small changes (perturbations) are introduced into the membership functions associated with the linguistic terms of a fuzzy inference system. These small changes mean that each time a fuzzy inference system is run on the same data, a different result is obtained. It is straightforward to extend this notion to create an ensemble fuzzy inference system featuring non-stationary fuzzy sets. In this talk (aimed at an audience not completely familiar with fuzzy methods), non-stationary fuzzy sets and reasoning will be explained in detail, and their use in several real-world scenarios of decision support in medical contexts will be described. Results will be presented to demonstrate the benefits of non-stationary fuzzy reasoning.
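As an illustrative aside, the perturbation idea can be sketched with a triangular membership function whose peak shifts slightly on every evaluation. The shapes and noise level below are invented for illustration, not taken from Dr Garibaldi's implementation:

```python
import random

# Illustrative sketch (assumed details): a triangular membership function
# whose position is perturbed slightly each time it is evaluated, so repeated
# runs on the same input return slightly different membership degrees.

def triangular(x, a, b, c):
    """Standard triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def non_stationary_membership(x, a=0.0, b=5.0, c=10.0, sigma=0.1, rng=random):
    """Evaluate the set with a small random shift of the whole function."""
    shift = rng.gauss(0.0, sigma)
    return triangular(x, a + shift, b + shift, c + shift)

rng = random.Random(42)
runs = [non_stationary_membership(4.0, rng=rng) for _ in range(100)]
# Same input, slightly different degrees each run; an ensemble aggregates them.
assert len(set(runs)) > 1
assert all(0.0 <= m <= 1.0 for m in runs)
```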
DATE2010-11-08
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLERobotic Exploration Challenges: 2010 to 2030 and beyond
SPEAKERProf Dave Barnes Home Page : Department Page :
PROFILEDave Barnes is Professor of Space & Planetary Robotics and has been active in robotics research for over 25 years. He is a member of the STFC Aurora Advisory Committee (AurAC), and the STFC Particle Physics, Astronomy and Nuclear Physics Science (PPAN) committee. He was a member of the 2003 Beagle 2 Mars lander consortium with responsibilities for the calibration of the Beagle 2 robot ARM, and for generating a virtual Beagle 2 model for rehearsals, planning and ARM commanding during the Mars mission. He was a member of the EADS Astrium Ltd. led team for the ESA ExoMars Rover Phase A Study. He is a Co-I on the ExoMars Panoramic Camera (PanCam) team for the 2018 mission, and is a member of the ESA Cosmic Vision Marco Polo mission team.
ABSTRACTThe use of robots for planetary exploration will create many new challenges over the coming decades. NASA is planning a number of Mars missions such as the Mars Science Laboratory (MSL) which is scheduled for launch in 2011. Building upon the successful Mars Express mission, ESA and NASA are working to fly new missions to Mars such as the ExoMars rover in 2018, and an eventual Mars Sample Return (MSR) mission. With each new mission greater demands are placed upon planetary robotics know-how, and new challenges have begun to emerge. The demand for greater science return and reduced operation costs is moving planetary robotics towards greater autonomy. Future planetary robots will need to travel further and faster, and conduct opportunistic science on the way!

In addition to navigation, autonomous localisation and autonomous scientific sample acquisition will be required. Scientists want to go to new locations on Mars where current wheeled locomotion is impractical, and aerobot technology is becoming a real possibility for future missions. Issues such as planetary robot survivability and longevity are key research areas that need to be addressed. As always these challenges are set against the `ideal' engineering requirement for zero mass, zero power and zero volume. This presentation will focus upon the future challenges for planetary robotic exploration from 2010 to 2030 and beyond.

After the seminar there will be refreshments at 5:30pm, followed by the BCS AGM and a talk at 6pm.

DATE2010-10-25
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEThe Automatic Identification of Non-Growing Follicles in Human Ovaries
SPEAKERTom Kelsey Home Page : School of Computer Science University of St Andrews :
PROFILE
ABSTRACTThe human ovary contains a fixed number of non-growing follicles (NGFs) established before birth; this number declines with increasing age, culminating in the menopause at 50-51 years on average. NGF populations are estimated using a standard methodology: the ovary is fixed, thin slices (5-20 micrometres) are taken at regular intervals, and these are stained with hematoxylin and eosin (HE). Sample regions are photographed, and the NGFs appearing in these images are counted by hand.
Assuming an even distribution of NGFs throughout the ovary, the population is estimated by integration. This process is time-consuming, and suffers from human mis-classification, integration error due to small sample sizes, and the inconsistent assumption of even distribution. In this talk I present a combined histological and automatic feature-detection approach, leading to reduced human and sampling errors at low magnification, which can, in principle, be used to obtain almost exact NGF populations from fully sectioned ovaries.
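As an illustrative aside, the standard scale-up estimate can be sketched in a few lines of Python. The slice counts and dimensions below are invented for illustration:

```python
# Sketch of the standard NGF population estimate (illustrative numbers only):
# count NGFs in a few sampled slices, then scale the mean count up to the
# whole ovary -- relying on the even-distribution assumption the talk questions.

def estimate_ngf_population(sampled_counts, slice_thickness_um, ovary_depth_um):
    """Scale the mean NGFs per sampled slice to the full stack of slices."""
    mean_per_slice = sum(sampled_counts) / len(sampled_counts)
    total_slices = ovary_depth_um / slice_thickness_um
    return mean_per_slice * total_slices

# Three 10-micrometre slices with 12, 15 and 9 NGFs, from a 20 mm deep ovary:
assert estimate_ngf_population([12, 15, 9], 10, 20_000) == 24_000.0
```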
DATE2010-04-26
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEMOEA/D and RM-MEDA: Two Recent Multiobjective Optimization Methods
SPEAKERProfessor Qingfu Zhang Home Page : School of Computer Science & Electronic Engineering, University of Essex :
PROFILE
ABSTRACTMultiobjective Evolutionary Algorithms (MOEAs) are one of the hottest topics in the area of evolutionary computation. A multiobjective optimisation problem (MOP) may have many, or even infinitely many, Pareto-optimal solutions. MOEAs aim at finding a number of well-representative Pareto solutions for a decision maker. Most current MOEAs do not take advantage of results in traditional mathematical programming. MOEA/D and RM-MEDA are two very recent MOEAs, developed at Essex, which use ideas from traditional optimisation methods. In this talk, I will explain the motivations, ideas, and main steps of these two methods, and show some experimental results.
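As an illustrative aside, the decomposition idea at the heart of MOEA/D (though not the full algorithm, and not the Essex code) can be sketched with a weighted Tchebycheff scalarisation on a toy bi-objective problem:

```python
# Illustrative sketch of decomposition: a weighted Tchebycheff scalarisation
# turns one multiobjective problem into several single-objective subproblems,
# one per weight vector; each picks out a different Pareto-optimal trade-off.

def tchebycheff(objectives, weights, ideal):
    """Scalarise an objective vector relative to the ideal point."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

def f(x):                      # toy bi-objective problem on [0, 1]
    return (x * x, (x - 1.0) ** 2)

ideal = (0.0, 0.0)
grid = [i / 1000 for i in range(1001)]   # brute-force search, for illustration
front = []
for w1 in (0.1, 0.5, 0.9):
    weights = (w1, 1.0 - w1)
    best = min(grid, key=lambda x: tchebycheff(f(x), weights, ideal))
    front.append(best)

# Different weight vectors recover different points on the Pareto front.
assert front == [0.75, 0.5, 0.25]
```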
DATE2010-03-22
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEComputing for the Future of the Planet
SPEAKERProfessor Andy Hopper, CBE, FREng, FRS Home Page : The Computer Laboratory, University of Cambridge :
PROFILE
ABSTRACTDigital technology is becoming an indispensable and crucial component of our lives, society, and environment. A framework for computing in the context of problems facing the planet will be presented. The framework has a number of goals: an optimal digital infrastructure, sensing and optimising with a global world model, reliably predicting and reacting to our environment, and digital alternatives to physical activities.
DATE2010-03-15
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEAutomatic Fault Detection for Autosub6000
SPEAKERDr Richard Dearden Home Page : School of Computer Science, The University of Birmingham :
PROFILERichard Dearden is a Senior Lecturer at the University of Birmingham. His research interests are in the area of reasoning under uncertainty, including work in planning, fault diagnosis, robotics and other aspects of statistical AI. He is PI for three current projects: AFDA, a NERC-funded project to add fault diagnosis to an underwater vehicle; CogX, an FP7-funded cognitive robotics project; and GeRT, also EU-funded, on generalising robot programs to learn planning operators. Previously he was at NASA Ames Research Center, where he led the Model-Based Diagnosis and Recovery Group. He received his Ph.D. from the University of British Columbia in 1999, on planning and reinforcement learning in uncertain worlds.
ABSTRACTState estimation and fault detection are important components of robotic systems. A number of approaches have been applied to the problem, but in recent years there have been significant successes for model-based approaches. In this talk I will describe two model-based diagnosis techniques: Livingstone 2, a discrete consistency-based approach, and a hybrid-systems approach based on particle filtering. We are using both approaches as part of AFDA (Automated Fault Detection for Autosub6000), a three-year NERC-funded project to provide fault detection technology for a deep-diving autonomous underwater vehicle operated by the National Oceanography Centre. As well as describing the approaches, I will also discuss how we are applying them to this project.
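As an illustrative aside, the particle-filtering style of diagnosis can be sketched on a toy model. The fault modes and likelihoods below are invented, not taken from the AFDA system:

```python
import random

# Illustrative sketch: a particle filter over discrete fault modes. Each
# particle carries a hypothesised mode; observations reweight the particles,
# and resampling concentrates belief on the mode most consistent with the data.

MODES = ["nominal", "thruster_fault", "sensor_fault"]
# P(observation "low_thrust" | mode) -- toy likelihoods, chosen for illustration.
LIKELIHOOD = {"nominal": 0.05, "thruster_fault": 0.9, "sensor_fault": 0.3}

def diagnose(observations, n_particles=1000, rng=random):
    particles = [rng.choice(MODES) for _ in range(n_particles)]
    for obs in observations:
        weights = [LIKELIHOOD[p] if obs == "low_thrust" else 1 - LIKELIHOOD[p]
                   for p in particles]
        particles = rng.choices(particles, weights=weights, k=n_particles)
    # The diagnosis is the mode holding the largest share of particles.
    return max(MODES, key=particles.count)

# Repeated low-thrust readings drive belief towards a thruster fault.
assert diagnose(["low_thrust"] * 5, rng=random.Random(0)) == "thruster_fault"
```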
DATE2010-03-08
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEBridging the gap between Formal and Computational Semantics
SPEAKERProfessor Stephen Pulman Home Page : Oxford University Computing Laboratory :
PROFILE
ABSTRACTNow that reasonably accurate wide-coverage parsers are available, it should be possible to increase semantic coverage of sentences too.

The literature in formal linguistic semantics contains a wealth of fine grained and detailed analyses of many linguistic phenomena. But very little of this work has found its way into implementations, despite a widespread feeling (among linguists at least) that this can't be very difficult: just fix a grammar to produce the right logical forms and hook them up to a theorem prover. In this talk I take a representative analysis of adjectival comparatives and ask what steps one would have to go through so as to use this analysis in a computational setting like open domain question-answering. I then try to identify some general conclusions that can be drawn from this exercise.
DATE2010-03-01
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEBiologically Inspired Robotics: Neuromechanics and Control
SPEAKERDr. Ravi Vaidyanathan, Bristol Robotics Laboratory Home Page :
PROFILESenior Lecturer in Biodynamics
ABSTRACTComplex behavior may be viewed as an emergent phenomenon resulting from the interaction of an entity with its environment through sensory-motor activity. From a systems perspective, the dynamic morphology of a structure plays a critical computational role in this process, in effect subsuming portions of the control architecture. In animals, for example, intrinsic properties of the musculoskeletal system augment the neural stabilization of the organism for an array of critical functions.
Invertebrates, in particular, have been able to exploit a wide range of behavioral niches because they utilize a body plan that can be modified to create functional adaptations optimized for a particular role. The talk will review basic methodologies for the enhancement of engineering (robotic) design based upon biological studies of animal behavior from a hierarchical systems perspective with emphasis on coupling between mechanics and control systems. Architectures founded upon biological inspiration will be summarized with specific examples from the speaker's work, including recent research that has been featured in New Scientist, Flight Global Magazine, The Engineer, and on television specials produced by the Discovery Channel and Tokyo Broadcasting Systems.
Applications highlighted will include medical and mobile robotic systems, including the Morphing Micro Air-Land Vehicle and the Bristol Hand Rehabilitation Robot.
DATE2010-02-22
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLETowards Robust Autonomy in 'Large Worlds'
SPEAKERDr Subramanian Ramamoorthy Home Page :
PROFILESubramanian Ramamoorthy is a Lecturer in the School of Informatics at The University of Edinburgh. Within the School, he is associated with the Institute of Perception, Action and Behaviour and the Informatics Life Sciences Institute.
Prior to that, he was in the Intelligent Robotics Lab at The University of Texas at Austin, where he received his Ph.D. His research is centred on sequential decision problems involving complex dynamical systems, motivated by applications in robotics and autonomous agent design, addressed using a combination of techniques from machine learning and mathematical systems theory. In addition to his academic experience, he has spent several years in various research and development groups at National Instruments Corp., working on algorithmic tools for motion control, computer vision and dynamic simulation.
ABSTRACTOne of my long term research goals is to create autonomous robots, e.g., humanoids, that are capable of functioning effectively in "large worlds", i.e., in environments with significant structural and quantitative uncertainty. Along this path, the core technical questions that must be addressed are those of sequential decision making. What makes these problems challenging is the confluence of two issues: (a) How could one encode dynamically dexterous behaviours in these high-dimensional constrained nonlinear dynamical systems so that the corresponding decision problems are tractable in online, resource-constrained settings? (b) How could an autonomous robot come to terms with a continually changing environment and task specification? I will outline a factored approach to solving these problems that involves two types of technical tools.
The problem of task encoding can be addressed by taking a geometric view of dynamics. Many interesting behaviours involving humanoid robots admit low-dimensional and abstract descriptions, e.g., all trajectories corresponding to a dynamical behaviour may be restricted to a submanifold in configuration space. I will first describe where this structure comes from, using a concrete example involving bipedal walking on irregular terrain. Then, I will describe an algorithmic procedure for learning this structure from data and utilizing it for motion synthesis, in the absence of analytical models of the system dynamics.
To the extent that a large class of interesting behaviours may be viewed in such geometric terms, one may pose the problem of coping with the changing environment and task specifications as a generic problem of adversarial navigation in these spaces. I will outline a game theoretic procedure for solving this problem, wherein the agent utilizes a set of learnt primitives to synthesize a composite strategy that constitutes the equilibrium of a game against nature.
These results represent initial steps in a larger research program, towards robust autonomous agents in large worlds. I will conclude with some remarks regarding current and future work in this direction.
DATE2009-12-07
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEAssisted Prostatic Biopsy. Problems, Perspectives and Challenges
SPEAKERDr Robert Marti, University of Girona Home Page :
PROFILE
ABSTRACTThe talk will focus on the work developed in recent years on prostate guided biopsy in the Medical Image Analysis lab of the VICOROB group. I will initially give an overview of the problem and the motivation of the project. I will then present the developed methods, focusing on the image fusion problem, which involves computer vision topics such as multi-modal rigid and non-rigid image registration, atlas-based image segmentation and image reconstruction applied to ultrasound (US) and magnetic resonance (MRI) images. Finally, open challenges and future directions will be discussed.
DATE2009-11-30
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEAn introduction to Scrum: the agile project management methodology
SPEAKERGeoff Watts Inspect and Adapt : Home Page :
PROFILE
ABSTRACTThis seminar will cover the basics of the Scrum framework, the roles within a Scrum team, and their responsibilities.
DATE2009-11-19
TIME13:10:00
PLACEHugh Owen C22


TITLEAn Introduction to Phylogenetic Networks
SPEAKERProfessor Vincent Moulton Home Page :
PROFILE
ABSTRACTIt has now been 150 years since Charles Darwin presented his theory on the origin of species, asserting that all organisms are related to one another by common descent via a “tree of life”. Since then, biologists have been able to piece together a great deal of information concerning this tree – benefiting in more recent times from the advent of ever cheaper and faster DNA sequencing technologies. Even so, it is now commonly accepted that certain organisms such as plants and viruses - including, for example, swine flu - commonly swap their genetic information, and so representing their evolution by a tree can in certain cases be somewhat misleading. In such cases phylogenetic networks can provide a useful alternative to a tree. In this talk we will present a brief introduction to phylogenetic networks, and will mention some recent results and open problems within this burgeoning area of computational biology.
DATE2009-11-09
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEInvariant Manifolds for Model Reduction
(with examples from physical and chemical kinetics)
SPEAKERProfessor Alexander N Gorban Home Page :
PROFILE
ABSTRACTFor dynamical models describing the behaviour of large-scale complex systems, one of the most powerful and rigorous approaches to model reduction is based on the notion of the system's slow invariant manifold. The theory of invariant manifolds was introduced more than a century ago through the work of two legendary figures of mathematics, Lyapunov and Poincaré. It experienced intense development during the 20th century and is currently being vigorously revisited and re-examined as an important and powerful tool in applied mathematics used for mathematical modelling and model reduction purposes.

In this talk I will review the theory of the invariance equation and its application to model reduction in dissipative systems. I will try to answer the following questions: How do we find the slow invariant manifold? How can an approximate slow invariant manifold be used for model reduction? Why should we attempt to reduce the description in the times of supercomputers? A collection of methods to derive slow invariant manifolds analytically and to compute them numerically is presented. The theory is illustrated by examples from the dynamics of highly non-equilibrium gas flows and chemical reaction kinetics.
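For readers unfamiliar with the invariance equation the abstract refers to, it can be stated compactly. In a common (generic, not necessarily the speaker's) formulation: if the full system is dx/dt = J(x) and the candidate slow manifold is parametrised as x = F(y) with reduced dynamics dy/dt = g(y), invariance requires the full vector field on the manifold to lie in the manifold's tangent space:

```latex
\[
  \dot{x} = J(x), \qquad x = F(y), \qquad \dot{y} = g(y),
\]
\[
  % Invariance: the full vector field evaluated on the manifold
  % equals the push-forward of the reduced dynamics.
  DF(y)\, g(y) \;=\; J\bigl(F(y)\bigr).
\]
```

Model reduction then amounts to solving this equation (exactly or approximately) for F and simulating the lower-dimensional system in y.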
DATE2009-10-26
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLETwo new applications for stereo-based 3D surface scanning
SPEAKERProfessor Bob Fisher Home Page : rbf@inf.ed.ac.uk : School of Informatics, University of Edinburgh :
PROFILE 
ABSTRACT1) Skin cancer diagnosis: The addition of 3D shape data to colour information from a dense stereo sensor improves both the segmentation and diagnosis of lesions. We apply this to several types of skin cancer that have not previously had much image analysis research.

2) Feature extraction from flying bats: As part of an acoustic sonar project, we are using a 500 frame per second range sensor to observe position and shape changes of bats as they capture prey. We will describe the new sensor and show some examples of the information extracted.
DATE2009-05-04
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLECounting Computers (and figuring out what they do)
SPEAKERAndy Ormsby WWW : andy@a-ormsby.co.uk :
PROFILE 
ABSTRACTThere are something like 7,000 data centres in the US and many more around the world. In total, there are probably something like 44 million servers sitting in data centres. The numbers continue to grow quickly. By 2020, data centres are projected to collectively have a greater carbon footprint than aviation. But the companies that own and run these data centres often have difficulty in answering even the most basic questions, such as "what servers do I have?", "what applications are running in my data centre?" or perhaps the more basic "what can I turn off?".
In this talk, I'll describe some of the technology that is used to help people discover the answers to questions like this and provide some insight into large scale IT along the way.
DATE2009-03-16
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEDigital TV, MHEG-5 and Java
SPEAKERJohn Hunt Personal Website : jeh@midmarsh.co.uk :
PROFILE 
ABSTRACTDigital TV is about to take over as the only option available for terrestrial TV. However, digital TV is about more than just watching TV; it's about interacting with the TV and running applications on your Set Top Box (STB). These applications may be simple electronic programme guides, games or (via a return line) interactive server-oriented e-commerce systems. However, the software used to run these systems is still developing: many STBs are now using Linux, some run Java, others still use MHEG-5, and others are starting to use Adobe AIR or Flash Lite. What does this mean for the future of the humble Set Top Box, will this help to create the connected home of the future, and what do MHEG-5 or Java applications look like anyway?
This talk will explore the current directions, illustrate some free to air digital applications using MHEG-5 and Java and consider where the current trajectory could take us as the STB becomes the central hub of the connected home.
DATE2009-03-02
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEHow to Build an Effective Team --- Evolving Neural Network Ensembles
SPEAKERProfessor Xin Yao Institute : Home Page : Contact Details :
PROFILE  
ABSTRACTPrevious work on evolving neural networks has focused on single neural networks. However, monolithic neural networks become too complex to train and evolve for large and complex problems. It is often better to design a collection of simpler neural networks that work collectively and cooperatively to solve a large and complex problem. The key issue here is how to design such a collection automatically so that it has the best generalisation. This talk introduces some recent work on evolving neural network ensembles, including negative correlation, constructive negative correlation and multi-objective approaches to ensemble learning.
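To give a flavour of the negative correlation idea mentioned in the abstract, here is a minimal sketch (not the speaker's code): each ensemble member is trained to fit the target while being penalised for agreeing with the ensemble mean, which encourages diverse, complementary members. The tiny random-feature "networks", the toy sine-regression task and the penalty weight `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy 1-D regression target.
X = np.linspace(-1, 1, 200)
y = np.sin(3 * X) + 0.1 * rng.standard_normal(200)

M, lam, lr = 5, 0.5, 0.05           # ensemble size, NC penalty weight, learning rate
W = rng.standard_normal((M, 10))    # fixed random hidden weights per "network"
V = np.zeros((M, 10))               # trainable output weights

def features(x, w):
    return np.tanh(np.outer(x, w))  # (N, 10) hidden activations

for _ in range(500):
    outs = np.stack([features(X, W[i]) @ V[i] for i in range(M)])  # (M, N)
    mean = outs.mean(axis=0)
    for i in range(M):
        # Negative-correlation gradient: fit the target, but subtract a
        # term that rewards disagreeing with the ensemble mean.
        err = (outs[i] - y) - lam * (outs[i] - mean)
        V[i] -= lr * features(X, W[i]).T @ err / len(X)

ensemble = np.stack([features(X, W[i]) @ V[i] for i in range(M)]).mean(axis=0)
mse = np.mean((ensemble - y) ** 2)
```

The ensemble average ends up fitting the data even though each member is deliberately pushed away from the consensus.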
DATE2009-02-16
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEFactorial Switching Linear Dynamical Systems for Neonatal Condition Monitoring
SPEAKERProfessor Chris Williams ckiw@inf.ed.ac.uk :
PROFILEInstitute and Home Page
ABSTRACTCondition monitoring often involves the analysis of measurements taken from a system which "switches" between different modes of operation in some way. Given a sequence of observations, the task is to infer which possible condition (or "switch setting") of the system is most likely at each time frame. In this talk we describe the use of factorial switching linear dynamical models for such problems. A particular advantage of this construction is that it provides a framework in which domain knowledge about the system being analysed can easily be incorporated.
We demonstrate the flexibility of this type of model by applying it to the problem of monitoring the condition of a premature baby receiving intensive care. The state of health of a baby cannot be observed directly, but different underlying factors are associated with particular patterns of measurements, e.g. in the heart rate, blood pressure and temperature. We use the model to infer the presence of two different types of factors: common, recognisable regimes (e.g. certain artifacts or common physiological phenomena), and novel patterns which are clinically significant but have unknown cause. Experimental results are given which show the developed methods to be effective on real intensive care unit monitoring data.
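The core inference task described above, estimating the most likely switch setting at each time frame, can be illustrated with a deliberately simplified sketch (not the authors' model): the continuous dynamics are collapsed into one Gaussian observation model per regime, so exact HMM forward filtering applies. In the full factorial switching linear dynamical system, exact inference is intractable and approximate methods are used. The regimes, heart-rate values and transition probabilities below are invented for illustration.

```python
import numpy as np

regimes = ["normal", "bradycardia"]                 # hypothetical regimes
means = np.array([150.0, 90.0])                     # heart-rate model per regime
stds = np.array([10.0, 15.0])
A = np.array([[0.95, 0.05],                         # switch transition probabilities
              [0.10, 0.90]])

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def filter_switch(obs):
    """Return P(switch_t | obs_1..t) for each time frame t."""
    belief = np.array([0.5, 0.5])
    out = []
    for x in obs:
        belief = (A.T @ belief) * gauss(x, means, stds)  # predict, then correct
        belief /= belief.sum()
        out.append(belief.copy())
    return np.array(out)

hr = [148, 152, 150, 95, 88, 92, 149]               # heart-rate trace with a dip
beliefs = filter_switch(hr)
```

Running this, the filter assigns high probability to "normal" for the first three frames, flips to "bradycardia" during the dip, and recovers afterwards.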
Joint work with John Quinn and Neil McIntosh.
DATE2009-02-02
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEResearch Opportunities within the Department of Computer Science
SPEAKERLeaders of Departmental Research Groups
PROFILE 
ABSTRACTIf you are interested in studying for an MPhil or PhD degree, then this talk is for you. Prof Shen who is the Department Director of Research will present a short overview of what it means to study for these degrees, and issues such as period of study, money, University Competition for funding etc. will be mentioned. The remainder of the seminar will consist of talks from the four department Research Group Heads. They will present examples of current research within their groups and provide a flavour of possible future research projects that you might be interested in. Our four research groups are: Advanced Reasoning; Computational Biology; Intelligent Robotics and the Vision, Graphics and Visualisation group.
DATE2008-12-08
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLESome novel developments in dimensionality reduction for classification
SPEAKERDr Guido Sanguinetti Machine Learning Group : Department of Computer Science : University of Sheffield :
PROFILE 
ABSTRACTCommon dimensionality reduction techniques such as PCA and generalisations address the problem of finding lower dimensional representation of data based on variance considerations. However, the most varying directions need not be the most interesting: for example, if a high dimensional data set is known to contain clusters, the best dimensionality reduction will extract features that best discriminate between clusters, rather than capturing the most variance. We exploit this idea and introduce a latent variable model that extracts at maximum likelihood optimal discriminative features (in the sense of Fisher's discriminant) without access to label information. We then extend the framework to address the semi-supervised problem and possible non-linear extensions.
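The key observation in the abstract, that the most varying direction need not be the most discriminative, is easy to demonstrate numerically. The sketch below (an illustration, not the speaker's method, and it uses labels to compute Fisher's discriminant, whereas the talk's model recovers such directions without label access) builds two clusters separated along one axis but with much larger within-cluster variance along the other:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 500
cluster = rng.integers(0, 2, n)
X = np.column_stack([
    cluster * 3.0 + 0.3 * rng.standard_normal(n),   # discriminative axis (x)
    5.0 * rng.standard_normal(n),                   # high-variance noise axis (y)
])

# PCA direction: top eigenvector of the covariance matrix.
Xc = X - X.mean(axis=0)
_, vecs = np.linalg.eigh(Xc.T @ Xc / n)
pca_dir = vecs[:, -1]                               # eigh sorts eigenvalues ascending

# Fisher's linear discriminant direction: Sw^{-1} (m1 - m0).
m0, m1 = X[cluster == 0].mean(axis=0), X[cluster == 1].mean(axis=0)
Sw = sum(np.cov(X[cluster == k].T) for k in (0, 1))  # within-class scatter
fisher_dir = np.linalg.solve(Sw, m1 - m0)
fisher_dir /= np.linalg.norm(fisher_dir)
```

PCA aligns with the noisy y axis (maximum variance), while Fisher's discriminant aligns with the x axis that actually separates the clusters.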
DATE2008-12-01
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEFreud, Ego, and Robotic Autonomy
SPEAKERDerek Smith (DSmith@uwic.ac.uk) Home Page :
PROFILEDuring the 1980s Derek Smith was with British Telecom, Cardiff, where he specialised in the design and operation of very large CA-IDMS "semantic network" databases. Since 1991 he has taught neuropsychology to Psychology and Speech and Language Therapy undergraduates. He is working currently in association with International Software Products, Toronto, on "Konrad", an artificial consciousness project using a CA-IDMS platform.
ABSTRACTIn this talk, the speaker will be discussing just how much modern computing already owes to the father of psychoanalysis, Sigmund Freud.
High on the surprisingly long list of achievements is Freud's seminal work on the modular architecture of the human lexico-semantic system and the uncanny accuracy of his predictions of real-time biological control structures, sometimes a full lifetime before the machines existed to put his ideas into practice. The talk will also highlight a number of areas where Freud still has much to offer researchers in the domains of artificial intelligence in general, and artificial consciousness in particular.
DATE2008-11-17
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEAropä and PeerWise: Supporting Student Contributed Pedagogy in Large Classes
SPEAKERJohn Hamer (J.Hamer@cs.auckland.ac.nz) Home Page :
PROFILEJohn Hamer is a Senior Lecturer in Computer Science at the University of Auckland. His research interests include how novices learn programming, and the use of student contributing pedagogies. In 2004 he developed Aropä, an award-winning tool that supports peer assessment, and which is now used by over 1,000 students each semester in a diverse range of courses including Commercial Law, English, Engineering, Medical Science and Pharmacology.
ABSTRACTAropä and PeerWise are two web-based tools that support collaborative learning in large, undergraduate classes. Aropä manages peer assessment activities, allowing students to take part in double-blind refereeing of their peers' coursework. PeerWise is a data bank of multi-choice questions contributed, explained and discussed entirely by students.
These systems leverage the latent intellectual capacity of a large class to provide new opportunities for learning. Using Aropä, each student might review three or four essays and receive a corresponding amount of feedback, all within a few days. The immediacy and diversity of the feedback is substantially greater than can be produced by a tutor.
While the quality of the reviewing is typically variable, there are affective benefits in challenging students to distinguish between good and poor feedback. By eliminating the stamp of authority and introducing diverse, possibly conflicting feedback, students are required to exercise their critical judgement in deciding what information to accept and reject. Moreover, tutor marking can still be used, and can even be mixed in with the peer reviewing.
PeerWise leverages the energy of a large class in a different way, building an annotated question bank that can contain thousands of multiple-choice questions. Each question is accompanied by an explanation written by the question author, overall quality and difficulty ratings assigned by students who have answered the question, and possibly a forum in which misunderstandings and possible improvements are discussed. The question bank thus serves two complementary purposes: a creative medium in which students engage in deep learning and critical reflection; and a drill-and-test library for developing fluency with the course content.
We have statistical evidence to show that active use of these tools strongly correlates with learning. Further, as a side-effect of channelling all interaction through a central database, a detailed record of student interaction is collected. This record allows instructors to monitor overall class performance and to assess individual students over time in modes that limit opportunities for plagiarism. With routine use, a rich picture of student performance is collected.
We are currently at the point of building additional tools to further exploit the interaction data. These include reputation systems, whereby the quality of an individual's comments and feedback is judged by the recipients, and recommender systems, in which participants are able to highlight instances of high quality work. Both of these ideas are present in popular online auction and shopping sites, but have not been widely adapted for educational use.
The talk will describe the Aropä and PeerWise tools, discuss the education theory behind the ideas, present results from the ongoing research study into student learning and attitudes toward the tools, and elaborate some of our ideas for future development.
DATE2008-11-03
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEModelling the growth of flowers
SPEAKERProfessor Andrew Bangham Department of Computing Sciences, University of East Anglia : A.Bangham@uea.ac.uk :
PROFILE
ABSTRACTThe relationship between gene expression and growth is often speculated on but here we describe an attempt to model their relationship with the help of finite element models.
DATE2008-10-20
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEA Vision for the Science of Computing
SPEAKERSir Anthony Hoare Microsoft Research : http://en.wikipedia.org/wiki/Tony_Hoare :
PROFILEhttp://en.wikipedia.org/wiki/Tony_Hoare
ABSTRACTI have a vision of the day when software is the most reliable component of any product or system which it controls. I have a vision of the day when software developers are regarded as the most reliable of professional engineers. Both of these visions will be advanced by development of our understanding of the basic Science of Computing and its embodiment in Design Automation tools for Software Engineering.
One of our basic topics of study in Computer Science is the computer program itself. Like other basic scientists, we address the most general and the most fundamental questions: ‘What does the program do?’ ‘How does it do it?’ ‘Why does it work?’ and ‘How do we know that the answers to all these questions are correct?’ As long as software engineers cannot answer these questions, we are unlikely to reduce the current prevalence of programming error.
For routine application in Software Engineering, the answers discovered by scientific research will be embodied in a suite of Design Automation tools, with coherent coverage of all phases in the lifetime of programs, from requirements analysis through specification, design, coding, testing, delivery and subsequent maintenance and evolution. As in other branches of engineering, these tools will automate all necessary deductions and calculations, and will thereby conceal from the professional engineer the unpopular fact that the language of Science, even Computing Science, is mathematics.
The final condition for widespread acceptance of the tools is the provision of a substantial corpus of case studies of their successful application in all phases of program development. These case studies will be generalised as widely applicable design patterns for adaptation and re-use in subsequent programming projects addressing the same area of application. Initially, the case studies will be selected and constructed by the scientists themselves, and used to assess the applicability of theory and the advancement of technology of tool construction. This work has already started in many centres of research throughout the world.
In summary, the achievement of my vision will depend on a high degree of co-operation and objectively decided competition between rival and complementary branches of Computing Science. It requires an increase in the scale and ambition of our research goals which is characteristic of other mature branches of science. Do we have the courage to make such a dramatic shift in our research culture?
DATE2008-05-23
TIME14:00:00
PLACEPhysics Main Lecture Theatre


TITLE"EKOSS - a knowledge creator centered system for supporting the sharing, discovery, and integration of expert knowledge"
and
"Science Integration Programme - Human"
SPEAKERYasunori Yamamoto and Steven Kraines, University of Tokyo, Japan. EKOSS : Science Integration Program : OReFiL : Anatomography :
PROFILEThe Science Integration Programme is headed by Professor Toshihisa Takagi from the Department of Computational Biology in the Graduate School of Frontier Sciences at the University of Tokyo. The programme is currently composed of four full time faculty members, one research fellow, and one adjunct faculty member.
ABSTRACT"EKOSS - a knowledge creator centered system for supporting the sharing, discovery, and integration of expert knowledge": Leveraging recent developments in semantic web technologies and artificial intelligence, particularly web-based ontologies and logical inference reasoners, the EKOSS (Expert Knowledge Ontology-based Semantic Search) platform has been developed and deployed on the Web. EKOSS focuses on providing knowledge creators with intuitive and easily accessible tools for creating computer-interpretable semantic statements that describe their expert knowledge based on ontologies. EKOSS also provides a set of tools for helping users search and mine the semantic statements through semantic matching, and for performing other reasoning tasks based on the RacerPro description logics reasoner. Using EKOSS, we hope to realize repositories of semantic statements that are authored by the knowledge experts themselves but that can also be interpreted "intelligently" by computer reasoning algorithms. Initiatives to "get the ball rolling" by constructing knowledge repositories in the areas of sustainability and energy science, life sciences, and engineering failure knowledge will be presented, together with preliminary analysis results of the semantic statements created to date. I hope that there will be opportunity and interest for discussion of the ontologies that have been constructed for the EKOSS system, particularly in the domain of life sciences.
"Science Integration Programme - Human": In April 2005, the "Science Integration Programme" was established at the University of Tokyo under the Division of Project Coordination at the office of the president. The programme, which is scheduled to continue until March 2011, is a research initiative directed at bridging the gap between science and humanity by integrating different fields of natural science across scales and domains.
In particular, the programme aims to establish new fields of study that target societal needs such as environmental problems and life science phenomena that defy solution through application of knowledge from individual fields of science or that show difficulties in grasping the overall picture. Within the overall framework of the Science Integration Programme, the Science Integration Programme - Human seeks to develop methods and examples that show the effectiveness of the scientific integration approach in a clearly understandable form for the domain life sciences. In particular, research is directed towards clarifying the structure of knowledge as well as the similarities and differences in the behaviors of systems in life sciences with special focus on human biology from genome to organism level phenomena. Based on this work, the programme aims to devise a framework for uniform description of several cross-sections of life science knowledge, including metabolic and signaling pathways, evolutionary science, human behavioral science, and brain science.
DATE2008-03-14
TIME16:10:00
PLACEB22, Llandinam


TITLESemantic Web: The Story So Far
SPEAKERProf. Ian Horrocks Home Page : Computing Laboratory : University of Oxford : OUCL-seminar.ppt : OUCL-seminar.pdf :
PROFILE 
ABSTRACTThe goal of Semantic Web research is to transform the Web from a linked document repository into a distributed knowledge base and application platform, thus allowing the vast range of available information and services to be more effectively exploited by software agents. As a first step in this transformation, languages such as RDF and OWL have been developed. These languages are designed to provide for the exchange of both data and conceptual schemas (AKA ontologies). Although fully realising the Semantic Web still seems some way off, OWL has already been very successful, and has rapidly become a de facto standard for ontology development in fields as diverse as geography, geology, astronomy, agriculture, defence and the life sciences. An important factor in this success has been OWL's basis in (description) logic, and the availability of sophisticated tools with built-in reasoning support. The increasingly widespread use of OWL has motivated a large international research effort in areas such as scalability, expressive power, ontology engineering, and integration with other KR paradigms. In this talk I will sketch the history of semantic web research, focussing mainly on the OWL ontology language. I will discuss the use of Description Logic reasoning in ontology engineering, present some illustrative examples of OWL applications, and conclude with a survey of some recent research in the area.
DATE2008-02-25
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLELife, death & computer science
SPEAKERProf. Harold Thimbleby Home Page : Department of Computer Science : Swansea University :
PROFILEHarold Thimbleby is Director, Future Interaction Technology Lab, Swansea University, a visiting professor at UCL and Middlesex University, and emeritus Professor of Geometry, Gresham College. He was a Royal Society-Wolfson Research Merit Award holder, and was awarded the BCS Wilkes Medal. His most recent book, on user interface design and programming, Press On: Principles of Interaction Programming, was published by MIT Press, 2007.
ABSTRACTWe graduate lots of computer scientists, but somehow many everyday devices are very badly designed and programmed, with sometimes disastrous results. This talk explores a well-documented death caused in part by bad program design; the talk then concentrates on compiling, specifically as applied to medical devices and calculators, and shows that elementary compiling techniques could drastically improve the usability and reliability of safety critical devices -- if only we treated their user interfaces as seriously as we do the syntax, semantics and performance of programming languages.
DATE2008-02-18
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEOptical 3D Reconstruction from Underwater Exploration
SPEAKERDr. Joaquim Salvi (Visiting Professor) Home Page : (University of Girona) : Ocean Systems Laboratory : School of Engineering and Physical Sciences : Heriot-Watt University : Slides :
PROFILEJoaquim Salvi graduated in computer science from the Technical University of Catalonia (Spain) in 1993, and received the D.E.A. degree in computer science in July 1996, and the Ph.D. degree in industrial engineering in 1998, from the Computer Vision and Robotics Group, University of Girona (Spain). Dr. Salvi received the Best Thesis Award in engineering for his Ph.D. dissertation. He is an associate professor with the Electronics, Computer Engineering and Automation Department, University of Girona. He is involved in several governmental projects and technology transfer projects. His current interests are in the field of computer vision and mobile robotics, focused on structured light, stereovision, and camera calibration. He is the leader of the 3-D Perception Lab. Joaquim Salvi is currently a visiting scholar at the Ocean Systems Lab, Heriot-Watt University (Scotland), where he is researching 3D computer vision applied to the navigation of underwater robots.
ABSTRACTThe talk is about a new technique to reconstruct large 3D scenes from a sequence of video images by combining the benefits of Bayesian filtering techniques and state-of-the-art 3D computer vision, two disciplines that unfortunately have seen little convergence in air and underwater scenarios. The proposed approach performs the alignment of sequences of 3D partial reconstructions of the scene using the navigation of the vehicle (position and velocity) and a Simultaneous Localization and Mapping (SLAM) approach. After a pre-processing stage to denoise and enhance the input images, partial 3D scenes are obtained using stereo techniques. Landmarks are then extracted and characterized using a combination of 2D and 3D features. A linear Kalman filter is used to perform SLAM. Experimental results show examples of image enhancement of underwater images in poor visibility; the reconstruction of a man-made object from a ground truth sequence; and the reconstruction of large scale 3D seabeds using Simultaneous Localization and Mapping of the vehicle in a virtual scenario. Results are readily applicable to land and air robotics. At present I am processing real data to reconstruct the seabed of Loch Linnhe, Scotland, and hope to have results in time to show in the presentation.
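The linear Kalman filter at the heart of the SLAM approach can be sketched in one dimension (a toy illustration, not the speaker's system, which filters a much larger state including landmarks): a constant-velocity motion model predicts the vehicle state, and noisy position measurements correct it.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                 # we only measure position
Q = 0.01 * np.eye(2)                       # process noise covariance
R = np.array([[0.5]])                      # measurement noise covariance

x = np.array([0.0, 0.0])                   # initial state estimate
P = np.eye(2)                              # initial state covariance

def step(x, P, z):
    # Predict step: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step: correct with the measurement z via the Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(2)
true_pos = np.arange(1, 21, dtype=float)   # vehicle moving at 1 m/s
for z in true_pos + 0.5 * rng.standard_normal(20):
    x, P = step(x, P, z)
```

After twenty noisy measurements the filter tracks both position and the unobserved velocity, and the covariance P quantifies the remaining uncertainty.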
DATE2008-01-28
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLETesting Web Applications for Vulnerabilities
SPEAKERGareth Bowker Ambersail :
PROFILEGareth Bowker works for Ambersail, a company specialising in network and Web Application security.
ABSTRACTGareth will be demonstrating some of the techniques he uses to test for web application vulnerabilities including SQL injection and cross site scripting. He will also be presenting some case studies from recent contracts.
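For students unfamiliar with SQL injection, the classic pattern can be shown in a few lines (a self-contained illustration against an in-memory SQLite table, not one of the speaker's case studies): splicing attacker input into a query string turns the WHERE clause into a tautology, while a parameterised query keeps the input as data.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "' OR '1'='1"

# Vulnerable: string formatting lets the input rewrite the SQL itself,
# so the WHERE clause becomes: password = '' OR '1'='1'
unsafe = db.execute(
    f"SELECT name FROM users WHERE password = '{attack}'"
).fetchall()

# Safe: the placeholder binds the input as a literal value, never as SQL.
safe = db.execute(
    "SELECT name FROM users WHERE password = ?", (attack,)
).fetchall()
```

The vulnerable query returns every user despite the wrong password; the parameterised query correctly returns nothing.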
DATE2007-12-07
TIME14:10:00
PLACERoom 320, Physical Sciences B


TITLEPrinciples Of Visualization Design: The Good, The Bad and The Ugly
SPEAKERDr Jeremy Walton Home Page : NAG Ltd :
PROFILEDr Jeremy Walton is a Senior Technical Consultant at NAG Ltd, with responsibility for the company's activities in visualization, particularly involving IRIS Explorer, NAG's visualization toolkit. His activities include application development, user support and training, technical marketing and visualization consulting. Jeremy is the leader of ADVISE, a DTI-funded research project in visualization and analysis, and has previously led NAG's contributions to the UK e-Science projects gViz and climateprediction.net. He has given numerous presentations and technical talks on NAG's work in visualization, besides contributing to several articles and papers in this field.

Before joining NAG in 1993, Jeremy was the leader of the visualization activity at BP Research, consulting for all parts of the BP Group. From 1984 to 1985, he was a post-doctoral researcher at Cornell University, working on the molecular simulation of adsorption. He holds a D.Phil in "Statistical Mechanics Of Liquids" from the University of Oxford, and a first class honours degree in Chemistry from Imperial College London.
ABSTRACTUsing visualization packages to turn numerical data into pictures can lead to better understanding, but only if the image is a good representation of the data. So what makes one visualization better than another? Although hard and fast rules probably can't be established for all circumstances, the consideration of several examples (good, bad and ugly) ought to lead to some general principles that could be applied when designing a visualization. This talk will attempt to elucidate them.
DATE2007-12-03
TIME16:10:00
PLACEPhysics Lecture Theatre B


TITLEAgile Technologies
SPEAKERDan Abel
PROFILEDan Abel is a Consultant/Developer for ThoughtWorks. He was an undergraduate at Aberystwyth from 1992 to 1996. Dan has been cutting code in teams for 12 years - he has worked and run teams on a range of projects from a multilingual airline website to investment banking services and has worked in groups that range in size from two to fifty.
ABSTRACTEven when an Agile project fails, it can still be valuable. This talk uses real-world examples to show how each business benefited, and how the agile practices used on the projects were honed in retrospect.
DATE2007-11-26
TIME16:10:00
PLACEPhysics Lecture Theatre B