ABSTRACTS

Introduction
The following pages list all the abstracts for papers to be given at the conference. They are grouped by Stream and are listed in the date/time order in which they appear in the overall timetable. Please remember that some streams are split over more than one day.
Each abstract listing shows the date, time and location of the talk, and has the abstract code which links it back to the At A Glance Timetable which appears earlier in this handbook.
This year, in an attempt to make it easier for delegates to select relevant and accessible papers, each submitting author was asked three questions. The questions and their range of answers were:
What is the nature of your talk?

  • Very practical

  • Practical

  • A mix of practical and theoretical

  • Theoretical

  • Very theoretical

Does your talk require prior knowledge of the subject area?

  • None

  • A little

  • Some

  • Quite a lot

Is your talk accessible and relevant to Practitioners?

  • Not at all

  • Somewhat

  • Relevant

  • Very

  • Highly

The three answers to these questions are listed after each abstract.


We have used these answers to identify talks of particular relevance to practitioners. These talks are marked (P) in the Full Timetable and in the abstract listing.
We hope this innovation helps you to select the talks best suited to your needs.

Analytics

Organiser: Nigel Phillips


09/09/2014 : 11:30 : Room Windsor 0.05

Code: OR56A1443

The Use of Behavioural and Social Data in Predictive Analytics for Consumer and SME Credit
Mr Alan Hambrook (Zoral Limited)

The session will focus on the use of behavioural, social and unstructured data and their impact on predictive modelling, using consumer and SME finance examples to illustrate. Illustrations will cover a range of areas including credit risk, real-time underwriting, and fraud detection/prevention. The techniques used and the resulting metrics will be shown using a number of completed case studies. The results show that the impact on core business metrics is significant, in a number of cases surpassing conventional techniques. The talk will also cover related issues including sourcing data, automating/monitoring data quality, identifying predictive vectors, and maintaining predictive quality.

What is the nature of your talk?: Practical

Does your talk require prior knowledge of the subject area?: A little

Is your talk accessible and relevant to Practitioners?: Highly




09/09/2014 : 13:30 : Room Windsor 0.05

Code: OR56A1363

Lessons from Clustering Tiny Data

Mr Nigel Phillips (London South Bank University)

The volumes of data available to organisations are already immense and growing exponentially. This big data resource represents huge potential for knowledge discovery, but also presents many challenges which require new tools to discover meaningful patterns. Where relationships are already known, supervised learning approaches such as neural networks, genetic algorithms and Bayesian reasoning can be used to search and process data efficiently, but the greatest gains may lie in identifying weak signals: relationships that are outside our current models. Unsupervised learning offers potential for such novel, disruptive discovery; however, there are a number of features of unstructured data that are likely to impede success. Two such impediments are noise (texts that are part of the corpus but effectively content free) and the presence of input signals that dominate the more interesting weak signals. These challenges are explored using a small corpus (590 documents) of short (typically 200-300 word) texts. This approach has proved illuminating: a number of key design issues are highlighted and some techniques for improving the detection of weak signals are evaluated.
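The dominating-signal problem described in this abstract can be illustrated with a minimal sketch (the toy documents and shared boilerplate vocabulary below are hypothetical, not from the talk): when every text in a small corpus shares a block of common wording, cosine similarity is dominated by that block, and filtering out terms common to all documents lets the weak topical signal emerge.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = lambda c: sqrt(sum(v * v for v in c.values()))
    return dot / (norm(a) * norm(b))

# Hypothetical short texts: heavy shared boilerplate, weak topical signal.
docs = [
    "conference paper abstract presents results on routing optimisation",
    "conference paper abstract presents results on credit scoring",
    "conference paper abstract presents results on routing heuristics",
]
vecs = [Counter(d.split()) for d in docs]

# Boilerplate dominates: even unrelated topics look similar.
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))

# Remove terms present in every document (the dominating signal).
common = set.intersection(*(set(v) for v in vecs))
filtered = [Counter({t: n for t, n in v.items() if t not in common}) for v in vecs]

# Unrelated topics now separate; the shared weak topic 'routing' remains.
print(cosine(filtered[0], filtered[1]), cosine(filtered[0], filtered[2]))
```

With the boilerplate removed, documents 1 and 2 become orthogonal while 1 and 3 retain their shared topic, which is exactly the kind of structure that boilerplate noise would otherwise bury.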

What is the nature of your talk?: Practical

Does your talk require prior knowledge of the subject area?: None

Is your talk accessible and relevant to Practitioners?: Very





09/09/2014 : 14:00 : Room Windsor 0.05

Code: OR56A1422

From Business Intelligence to Predictive Analytics to Cognitive Analytics

Mr Matthew Robinson (IBM)

All industries have similar and pressing needs to increase visibility and control over progressively more detailed aspects of their operations. The increasing breadth and capability of technology, both hardware and software, and the integration between operational systems mean that embedding analytics to drive better decisions is a must-have: simple reports and dashboards don't offer the actionable insight needed. Combining monitoring (Business Intelligence) and actionable insight (Predictive Analytics) allows organisations to take control of the many touch points between consumers and systems, often in real time. Cognitive computing systems overcome the challenge of creating, integrating and managing unstructured, analogue-text data through a massively parallel processing system. This, in turn, enables the system to evolve response guidelines and policies as new material is added to the data set. To rise to the challenge of a landscape of operational complexity, organisations are moving their analytics from traditional statistics, through predictive models, into cognitive analytics. This talk addresses the key levers for successful use of analytics in organisations, including case studies, and considers the emerging area of cognitive computing systems applied to the field of analytics.


What is the nature of your talk?: Practical

Does your talk require prior knowledge of the subject area?: A little

Is your talk accessible and relevant to Practitioners?: Highly




09/09/2014 : 14:30 : Room Windsor 0.05

Code: OR56A1338

Risk Management Strategies for Finding Universal Portfolios

Dr Esther Mohr (University of Mannheim)

We consider an on-line version of the portfolio selection problem, and present two algorithms that achieve almost the same wealth as the best constant rebalanced portfolio (BCRP) computed in hindsight. A portfolio is called universal if it achieves asymptotically the same wealth as the BCRP, completely independently of statistical assumptions. Existing universal portfolio algorithms do not consider trading risk. Successfully utilizing a portfolio selection algorithm in practice, however, requires the possibility of including risk management. Our two algorithms take trading risk into account via the maximum possible return fluctuation. By means of competitive analysis we obtain upper bounds on the worst-case performance of our algorithms. These bounds equal the bound obtained by Cover's Universal Portfolio algorithm (UP), which is essentially unimprovable. Numerical results using data from the NYSE over a 22-year period show that our algorithms are able to beat existing on-line portfolio selection algorithms, including UP, as well as the BCRP, in terms of risk-adjusted performance.
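The hindsight BCRP benchmark used in this abstract can be sketched in a few lines; the two-asset price relatives below (one volatile asset, one cash-like asset) are a standard illustrative example and not the paper's data, and the grid search is a simplification of how the optimum would actually be computed.

```python
import numpy as np

# Toy price relatives (rows = periods, columns = assets). Asset A alternates
# doubling and halving; asset B is cash. Neither asset alone gains wealth,
# but a rebalanced mix of the two does.
x = np.array([[2.0, 1.0], [0.5, 1.0]] * 3)

def crp_wealth(b, relatives):
    """Final wealth of a constant rebalanced portfolio with weights b:
    each period, wealth is multiplied by the portfolio return x_t . b."""
    return float(np.prod(relatives @ b))

# Best constant rebalanced portfolio (BCRP) in hindsight via grid search
# over the weight on asset A.
grid = [np.array([w, 1 - w]) for w in np.linspace(0, 1, 101)]
best = max(grid, key=lambda b: crp_wealth(b, x))

print(round(best[0], 2), round(crp_wealth(best, x), 3))
```

For this sequence the optimum is the 50/50 mix: each up/down pair multiplies wealth by 1.5 × 0.75 = 1.125, so rebalancing grows wealth even though holding either asset alone ends flat, which is the effect universal portfolio algorithms aim to match without hindsight.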


What is the nature of your talk?: A mix of practical and theoretical

Does your talk require prior knowledge of the subject area?: A little

Is your talk accessible and relevant to Practitioners?: Relevant




09/09/2014 : 15:30 : Room Windsor 0.05

Code: OR56A1271

Modelling Operational Risk using Skew t-copulas via Bayesian Inference

Miss Betty Johanna Garzon Rozo and Prof Jonathan Crook (University of Edinburgh)

Operational risk losses are heavy tailed and are likely to be asymmetric and extremely dependent among business lines/event types. We propose a new methodology to assess, in a multivariate way, the asymmetry and extreme dependence between severities, and to calculate the capital for operational risk. This methodology simultaneously uses extreme value theory and the skew t-copula: the former models the loss severities more precisely; the latter effectively models asymmetry and extreme dependence in high dimensions. The paper analyses an updated data set, the SAS Global Operational Risk Data.
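The extreme value theory component can be sketched with a peaks-over-threshold fit; the synthetic lognormal losses and the 95%/99.9% levels below are illustrative assumptions, not the paper's data or thresholds, and the copula step is omitted.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
# Hypothetical operational-loss severities: a heavy-tailed lognormal sample.
losses = rng.lognormal(mean=10, sigma=2, size=5000)

# Peaks-over-threshold: model exceedances over a high quantile with a
# generalised Pareto distribution (GPD), location fixed at zero.
u = np.quantile(losses, 0.95)
excess = losses[losses > u] - u
shape, loc, scale = genpareto.fit(excess, floc=0)

# A far-tail quantile (here 99.9%) reconstructed from the fitted GPD -- the
# kind of figure that feeds an operational-risk capital calculation.
p = 0.999
var = u + genpareto.ppf((p - 0.95) / 0.05, shape, loc=0, scale=scale)
print(var > u)
```

Fitting the GPD only to exceedances, rather than a single distribution to the whole sample, is what lets the severity tail be modelled "more precisely" in the sense the abstract describes.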


What is the nature of your talk?: Practical

Does your talk require prior knowledge of the subject area?: Some

Is your talk accessible and relevant to Practitioners?: Highly




09/09/2014 : 16:00 : Room Windsor 0.05

Code: OR56A1434

Analytics Applied to Busting Financial Crime in Real Time

Dr Ana Costa e Silva and Mr Alvaro Prendes Ramos (TIBCO Spotfire)

Existing financial crime solutions suffer from two problems: many false positives and long investigation times. At TIBCO, we propose predictive modelling to solve the former and Spotfire for the latter. Our models have a supervised and an unsupervised learning component; Spotfire/TERR help users configure these. Supervised learning requires a list of past 0s and 1s, i.e. past transactions of which we know some are fraudulent and some are not. With this, any discriminative model (e.g. random forest) can be trained to optimally detect past fraud patterns going forwards. However, a 0/1 list is not always available; it will be affected by frauds that previously went undetected; and fraudsters are creative: once one technique stops working, they will think of another. This brings us to unsupervised learning: PCA (principal component analysis) captures the most significant patterns within the data into fewer variables, which we use for clustering. We then measure the distance of each transaction to the global mean or to the cluster centres to order transactions from oddest to least odd. Once we publish these two models into the TERR server, Streambase can use them to compute the risk/oddness of each transaction in real time. Rules for the actions to take when a suspicious transaction is spotted (e.g. email the police) can be set in Business Events. Dangerous transactions will be investigated by humans, whose decisions can be made maximally efficient using a Spotfire template that collects all information about the transaction’s history from disparate sources. Investigators can then complete a FormVine report, including their 0/1 vote, which gets stored in a centralised database. In time, these new 0/1s are fed back into the supervised learning algorithm, allowing the system to improve itself over time. The system is applicable to several sorts of financial crime, e.g. trade surveillance, online commerce, or AML.
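The unsupervised component described in this abstract (PCA followed by distance-based oddness ranking) can be sketched as follows. The synthetic transactions and feature names are hypothetical, the clustering step is omitted, and distance is measured only to the global mean; it is a sketch of the idea, not TIBCO's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic transactions: amount, hour of day, merchant-risk score
# (feature names are illustrative only).
normal = rng.normal([50, 14, 0.2], [20, 3, 0.1], size=(200, 3))
odd = np.array([[900.0, 3.0, 0.9]])  # one clearly anomalous transaction
X = np.vstack([normal, odd])

# PCA via SVD of the centred data: project onto the leading components,
# compressing the dominant patterns into fewer variables.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # coordinates in the top-2 principal components

# Oddness = distance from the global mean in PC space; rank oddest first.
oddness = np.linalg.norm(scores, axis=1)
ranking = np.argsort(-oddness)
print(ranking[0])  # index of the oddest transaction
```

Ranking transactions from oddest to least odd in this way needs no labelled fraud history, which is precisely why the abstract pairs it with the supervised model.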


What is the nature of your talk?: Practical

Does your talk require prior knowledge of the subject area?: None

Is your talk accessible and relevant to Practitioners?: Highly




09/09/2014 : 16:30 : Room Windsor 0.05

Code: OR56A1262

Data Science: Best Practice & Governance in Analytics

Ms Sayara Beg (Datanut)

This talk will describe how analytics, and the recent big data revolution, have given rise to the new role of the 'Data Scientist'. It will explore core elements such as expertise, knowledge of tools and interpersonal skills that are expected from a Data Scientist today, and how those core elements have evolved over time. It will conclude with why the need for best practice, ethics and governance has now become urgent, and how this urgency can be addressed.


What is the nature of your talk?: A mix of practical and theoretical

Does your talk require prior knowledge of the subject area?: Some

Is your talk accessible and relevant to Practitioners?: Very




11/09/2014 : 09:00 : Room Windsor 0.05

Code: OR56A1366

Quality ‘v’ Quantity: An Analytics Perspective of Academic Assessment in Quantitative Methods

Dr Harry Venables (University of York)

University students faced with word count limits for essay, report and dissertation writing can experience high levels of anxiety. When management or business students have to write reports on quantitative studies, stress levels are often further exacerbated by writing about techniques and skills that they may already find difficult to grasp. These students often feel that they need an increase in the upper word limit. This paper is a case study based on observations that arose from an assessment for a second-year undergraduate management analytics module. It aims to apply analytics techniques similar to those used by Squawka in their analyses of footballer performance. The objective is to identify whether word count limits have any impact on the overall mark given for a report-style assessment. The purpose of giving word limits to students is not to restrict their production of work, but to allow them to focus and concentrate on the important issues of an assessment. This paper touches on the age-old anecdotal teaching concept of “Quality ‘v’ Quantity”. The analysis conducted indicates that this is indeed measurable and valid. Furthermore, allowing students to write outside the set word limits does not necessarily mean they will obtain a higher mark; even if they think they will get a better mark for expanding their work, they are more likely to gain an insignificant improvement, if any. Consequently, students who were panicking just prior to submission and were given a small word limit expansion probably gained nothing more than peace of mind to relax their anxieties.
What is the nature of your talk?: Practical

Does your talk require prior knowledge of the subject area?: A little

Is your talk accessible and relevant to Practitioners?: Highly


11/09/2014 : 09:30 : Room Windsor 0.05

Code: OR56A1298

Journal of the Operational Research Society: Analysis and Geographical Mapping based on Web of Science data

Dr Nei Soma and Dr Alexandre Alves (ITA/Brazil) and Prof Horacio Yanasse (UNIFESP/Brazil)

The Journal of the Operational Research Society is the longest-running journal covering Operational Research. This paper analyses and maps the content of JORS over a period of 58 years using data from the Web of Science. We map the geographical distribution and the relationships of the authors of articles published in JORS, considering the institutions of their affiliation, and the geographical distribution of the authors who cited those articles. In addition, we present the articles’ keywords and the authors’ areas in the form of word clouds. With these maps we can verify and analyse the coverage of JORS.


What is the nature of your talk?: A mix of practical and theoretical

Does your talk require prior knowledge of the subject area?: A little

Is your talk accessible and relevant to Practitioners?: Very




11/09/2014 : 11:30 : Room Windsor 0.05

Code: OR56A1267

Morphological Distance as an Enabler to Refine Morphological Analysis Solutions

Mr Bruce Garvey and Prof Peter Childs (Imperial College London) and Dr Nasir Hussain (Strategy Foresight Partnership LLP)

A common argument against the use of Morphological Analysis (MA) when addressing multi-dimensional problems is that the total number of configurations generated can be unmanageable. Software has helped mitigate this conundrum when used in conjunction with pair-wise analysis of parametric states. However, use of this software can still leave the modeller with large numbers of viable configurations to analyse. By resurrecting Robert Ayres’ concept of Morphological Distance (MD), and using it as a follow-on process once the pair-wise analysis has been conducted, the remaining configurations are more meaningfully classified into three segments. Ayres (1969) categorises these segments as, first, Occupied Territory, representing the current state of the art, where minimal innovation is likely to occur; secondly, the Perimeter Zone, where viable configurations differ in a number of states from those in the Occupied Territory and reflect some level of innovation over existing art; and finally Terra Incognita, consisting of those configurations differing from existing art in four parameter states or more. Given their distance from solutions in the Occupied Territory, Terra Incognita solutions are likely to be truly creative. Ayres’ original approach was to use MD as a first-stage reduction process, in the absence at the time of computerised pair-wise analysis. In this paper the authors will show, by way of examples, how web-based software not only generates reduced configurations within the solution space but also allows for the clustering of those configurations into the three segments as determined by the Morphological Distance. This refines the number of workable solutions and provides improved screening for product and technology designers involved in ideation.
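Morphological Distance is simply the number of parameter states in which two configurations differ. A minimal sketch of the three-segment classification, on a hypothetical five-parameter morphological field (the parameters, states, and reference configuration are invented for illustration, and viability screening via pair-wise analysis is omitted):

```python
from itertools import product

# Hypothetical morphological field: each parameter and its possible states.
parameters = {
    "material": ["steel", "polymer", "composite"],
    "actuation": ["manual", "electric", "pneumatic"],
    "interface": ["dial", "touchscreen"],
    "power": ["battery", "mains"],
    "form": ["handheld", "desktop"],
}

def morphological_distance(a, b):
    """Number of parameters in which two configurations differ (after Ayres, 1969)."""
    return sum(1 for x, y in zip(a, b) if x != y)

# Treat one configuration as representative of the current state of the art.
state_of_the_art = ("steel", "manual", "dial", "mains", "desktop")

# Segment the full configuration space by distance from the state of the art:
# 0-1 differing states -> Occupied Territory, 2-3 -> Perimeter Zone,
# 4 or more -> Terra Incognita.
segments = {"occupied": [], "perimeter": [], "terra_incognita": []}
for config in product(*parameters.values()):
    d = morphological_distance(config, state_of_the_art)
    if d <= 1:
        segments["occupied"].append(config)
    elif d < 4:
        segments["perimeter"].append(config)
    else:
        segments["terra_incognita"].append(config)

print({k: len(v) for k, v in segments.items()})
```

Even this tiny field generates 72 configurations, and the distance-based segmentation immediately separates the few near-state-of-the-art combinations from the genuinely novel ones, which is the screening effect the abstract describes at scale.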


What is the nature of your talk?: A mix of practical and theoretical

Does your talk require prior knowledge of the subject area?: Some

Is your talk accessible and relevant to Practitioners?: Highly




11/09/2014 : 12:00 : Room Windsor 0.05

Code: OR56A1419

Tensor Analysis of Interactions between Hyper-Heuristic Components

Mr Shahriar Asta and Dr Ender Özcan (University of Nottingham)

Hyper-heuristics are automated search methodologies that control and generate (meta)heuristics for solving computationally hard optimization problems. Achieving a high level of generality, by supporting applicability to multiple problem domains, and reusability are two key goals in hyper-heuristic design. Hence, the use of data science techniques, particularly machine learning, is crucial. In this study, we present a tensor-based approach for designing a hyper-heuristic. The data collected during the hyper-heuristic search process is represented as a high-dimensional tensor for further processing in a pre-learning stage. Our study shows that tensor analysis of such data improves the performance of the overall method. Moreover, we will demonstrate that the proposed approach generalizes well across different problem domains without requiring any change.
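A minimal sketch of representing hyper-heuristic trace data as a third-order tensor; the random trace, heuristic count, and outcome encoding below are illustrative assumptions, and the talk's actual tensor analysis (the pre-learning stage) is more involved than the single slice shown here.

```python
import numpy as np

# Hypothetical hyper-heuristic trace: which low-level heuristic was applied at
# each step, and whether the move improved the solution (1) or not (0).
rng = np.random.default_rng(1)
n_heuristics, n_steps = 4, 1000
trace = rng.integers(0, n_heuristics, size=n_steps)
improved = rng.integers(0, 2, size=n_steps)

# Third-order tensor of counts: (previous heuristic, current heuristic, outcome).
T = np.zeros((n_heuristics, n_heuristics, 2))
for prev, cur, out in zip(trace, trace[1:], improved[1:]):
    T[prev, cur, out] += 1

# One simple analysis: the slice of improving moves, from which a promising
# heuristic pairing can be read off.
improving = T[:, :, 1]
best_pair = np.unravel_index(np.argmax(improving), improving.shape)
print(int(T.sum()), best_pair)
```

Keeping the data as a tensor, rather than flattening it into a table, preserves the interactions between components (which heuristic followed which, with what outcome) that the abstract says the pre-learning stage exploits.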


What is the nature of your talk?: A mix of practical and theoretical

Does your talk require prior knowledge of the subject area?: Quite a lot

Is your talk accessible and relevant to Practitioners?: Relevant


Behavioural Operational Research

