Expert Talks

Prof. Radim Burget
Signal Processing Laboratory, Department of Telecommunications, Brno University of Technology, Brno, Czech Republic, European Union

Bio: Radim Burget is an Associate Professor at Brno University of Technology and heads the Signal Processing programme at the SIX Research Centre. He has been involved in artificial intelligence research for many years and in numerous research projects, including projects funded at the European and national levels as well as privately funded projects. Companies he cooperates with include Honeywell, Mitsubishi Electric, RapidMiner, Konica Minolta, Webnode and many others.

  • Title: Towards Artificial Super Intelligence
  • Abstract: This presentation focuses on artificial intelligence (AI) and information processing. First, it gives a summary of the development of the field from its very beginning, briefly mentions the most important achievements of humanity in AI, and summarises the most important successes of 2018 and what changed in that year. The lecture gives a brief introduction to so-called deep neural networks and what they are, including several animations and demonstrations showing AI capabilities. The presentation is aimed at managers and decision-makers as well as at experts and engineers working in the field. Decision-makers and managers will learn where AI stands in 2019, where the area is expected to go in the coming years and what the capabilities of today's AI are. Some parts of the presentation are aimed at experts actively working with deep neural networks and will provide tips and tricks. These tips and tricks are inspired by the latest findings from 2018 and can help you achieve state-of-the-art performance with only small changes. The audience will also learn how current AI compares to human intelligence, which parts of the hype around AI are justified and which are not, whether we can ever reach artificial super intelligence (ASI), and whether we are close to achieving it.

Prof. Carlos M. Travieso-Gonzalez
Head of the Signals and Communications Department, Institute for Technological Development and Innovation in Communications (IDeTIC), University of Las Palmas de Gran Canaria (ULPGC), Spain.

Bio: Dr. Carlos M. Travieso-González is an Associate Professor at the University of Las Palmas de Gran Canaria, Spain. He received the M.Sc. degree in Telecommunication Engineering from the Polytechnic University of Catalonia in 1997 and the Ph.D. degree from the University of Las Palmas de Gran Canaria, Spain, in 2002. He has been a researcher in more than 34 international and Spanish research projects. He is the co-author of 2 books, co-editor of 7 proceedings books, a guest editor for international journals and the author of many book chapters. He has over 300 papers published in international journals and conferences and has published a patent, with two more under revision.

  • Title: E-Health tools on emotional detection
  • Abstract: The physiological signals, also known as biosignals, most commonly used for biomedical and biometric identification are the electrocardiogram (ECG) and the electroencephalogram (EEG). ECG measures the electrical activity of the heart and EEG measures the electrical activity of the brain. There are other, very rarely used signals that we consider studying as part of this work: for example, the electromyogram (EMG), which is a record of the electrical activity produced by the muscles and nerves, and the galvanic skin response (GSR), or skin conductance, which is an indication of psychological or physiological arousal such as fear, anger or other feelings. The detection of the degree of emotion through physiological signals is a very poorly studied area that can offer a new and efficient system, one based on combining several physiological signals as a method of identifying the degree of emotion. The objective of this proposal is to analyze the physiological signals that reflect people's emotions, quantify them and perform automatic detection, which can become an innovative and robust tool that shows the degree of emotion. To implement the system, digital image processing techniques and artificial intelligence methods will be applied to obtain an objective, low-cost emotion measurement system using physiological signals.

Prof. Rossi Setchi
Director of the Research Centre in AI, Robotics and Human-Machine Systems (IROHMS); Deputy Head of School and Director of Research, Cardiff School of Engineering, Cardiff University, United Kingdom.

Bio: Rossi Setchi is Professor in High-Value Manufacturing, Deputy Head of School and Director of Research at the School of Engineering. She also leads the Research Centre in AI, Robotics and Human-Machine Systems at Cardiff, which is a collaboration of the Schools of Engineering, Psychology, and Computer Science and Informatics. She has a distinguished track record of research in a range of areas including AI, robotics, systems engineering, manufacturing, industrial sustainability, Cyber-Physical Systems and Industry 4.0, and, in particular, has built an international reputation for excellence in knowledge-based systems, computational semantics and human-machine systems. Over a period of 20 years, Professor Setchi has published over 220 peer-reviewed, high-quality publications and has been able to secure, with colleagues, external grant support totalling more than £22 million. She has collaborated with over 20 UK and 30 overseas universities, 15 research organisations and 30 industrial companies from more than 20 countries in Europe, Asia and Australia.

  • Title: The Future of Human-Machine Systems: Human-Like AI and Human-Centred Robotics
  • Abstract: This talk will focus on the main principles of human-like computing and human-centred robotics that will provide machines and robots with human-like perceptual, reasoning and learning abilities, and enable safe and intuitive interactions and co-existence of humans and machines. The main research challenge is the need for more acceptable, understandable and explainable AI and robotic solutions. Professor Setchi will discuss the limitations of current AI and statistical machine learning and the need for more advanced human-like AI, which learns from humans using a small number of examples. She will explain how computational semantics and advances in smart sensing can be used to create human-like AI and improve productivity, creativity, situational awareness and intuitive interaction, and reduce the likelihood of human confusion. She will illustrate her talk with examples from several multidisciplinary projects, which investigate real-world problems in the context of human-centred and context-aware computing.

Prof. Sabah A. Jassim
Professor of Mathematics and Computation, School of Computing, University of Buckingham, Buckingham MK18 1EG, United Kingdom.

Bio: Sabah Jassim has a BSc and an MSc from Baghdad University and a PhD in Mathematics from the University of Wales. He is Professor of Mathematics and Computation and teaches various courses at undergraduate and postgraduate level, including Image Processing, Mathematics and Algorithms. He is also a visiting lecturer at City University, London, and Fachhochschule Wedel, Germany. He supervises research students in the areas of Biometric Authentication, Biomedical Image Analysis and Dynamic Encryption.

  • Title: Topological Data Analysis for Machine Learning in Computer Vision
  • Abstract: The availability of sophisticated image processing tools enables a variety of image manipulations without leaving visual traces. These tools are frequently exploited to tamper with image content, including malicious morphing attacks on face biometrics, hiding secret communications by steganography, and creating fake images/videos. Image analysis aims to automatically or semi-automatically distinguish different classes of images depending on the objective(s) of the specific application. Traditional image analysis schemes rely on extracting texture feature vectors (TFVs) and training a variety of classifiers on these vectors or their histograms. Here we adopt the newly emerging Topological Data Analysis (TDA) paradigm, which exploits changes to the spatial distribution of TFVs. TDA and its computational tool of Persistent Homology will be shown to be highly effective in detecting a variety of barely visible, tiny changes in image texture primitives that result from malicious tampering or naturally occurring abnormalities. We present a variety of topologically sensitive TFVs that can be used to deal with such image analysis applications. We shall demonstrate the applicability of TDA image analysis schemes to detecting face morphing and fake video attacks, detecting disease-caused distortion of tissue scans, and as a steganalysis tool to detect hidden secret communications.
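As a rough illustration of the Persistent Homology bookkeeping the abstract mentions, the sketch below computes 0-dimensional persistence (births and deaths of connected components) for a tiny point cloud using a union-find over a Vietoris-Rips filtration. Real TDA pipelines apply libraries such as Ripser or GUDHI to texture feature vectors; the point set and the pure-Python implementation here are illustrative assumptions only.

```python
# Toy sketch: 0-dimensional persistent homology of a point cloud.
# Components are all "born" at radius 0 and die when the growing radius
# merges them; union-find over distance-sorted edges records the deaths.
import math
from itertools import combinations

def h0_persistence(points):
    """Return the death times of H0 classes as components merge."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # The filtration: all pairwise distances in increasing order.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj        # two components merge: one class dies
            deaths.append(d)
    return deaths                  # n-1 finite deaths; one class persists

# Two well-separated clusters: one large death value flags the gap.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5)]
print(h0_persistence(pts))
```

A single large death value in the output is the kind of topological signature that distinguishes well-separated structure from noise, which is what the talk exploits to flag tampered textures.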

Prof. Hao Ying
IEEE Fellow, Professor, Department of Electrical and Computer Engineering Wayne State University, Detroit, Michigan 48202, USA

Bio: Professor Ying has published one single-author research monograph/advanced textbook entitled Fuzzy Control and Modeling: Analytical Foundations and Applications (IEEE Press, 2000, 342 pages; foreword by Professor Lotfi A. Zadeh), which contains solely his own research results. He has coauthored another book titled Introduction to Type-2 Fuzzy Logic Control: Theory and Applications (IEEE Press and John Wiley & Sons, Inc., 2014). In addition, he has published more than 100 peer-reviewed journal papers and over 150 peer-reviewed conference papers. Prof. Ying's work has been widely cited - his Google Scholar h-index is 41 (the 41 publications included in his index have by themselves generated more than 4,000 citations so far). He holds two U.S. patents. He is serving as an Associate Editor or a Member of Editorial Board for eight international journals, including the IEEE Transactions on Fuzzy Systems.

  • Title: Comparison of Machine Learning Techniques to Identify Sepsis Patients in the Emergency Department
  • Abstract: Background: Sepsis is a major health concern with substantial impact on health care. Early identification of patients with sepsis is critical to providing timely care. There have been reports of applying machine learning techniques for sepsis detection in the literature. These techniques produce models with varying degrees of transparency: transparent models such as Decision Trees, semi-transparent models such as Naïve Bayes Networks, and black-box models such as Support Vector Machines (SVM) and Neural Networks. Objectives: The objective of this study is to compare the performance of several machine learning techniques in identifying emergency department septic patients. In the comparison, we also included the results of the novel rule-point sepsis alert system that we developed previously. Methods: This study uses a random sample of 912 sepsis and 975 non-sepsis patients admitted to the emergency department of the Detroit Medical Center. Each case was adjudicated individually. Our rule-based model is a point-based system which assigns different weighted points to each rule. The patients were divided into 14 groups based on age and other patient variables, and each group has its own set of rules. Thresholds of the variables in each rule were optimized by a genetic algorithm (GA) to maximize sepsis detection performance. We used the MATLAB Classification Learner, which generated results for 23 different machine learning models. We also used MATLAB's Neural Network Toolbox to train and build Neural Network models of different sizes with one hidden layer. Results: The 23 machine learning models produced a range of performances. The Fine Gaussian SVM showed the highest sensitivity at 95.8%, but with 77.7% specificity and 80.1% positive predictive value (PPV). The Coarse k-Nearest Neighbors (KNN) model had the highest specificity at 94.5%, but its sensitivity was 84.9%.
For balanced results with over 90% sensitivity, specificity and PPV, the Medium Gaussian SVM and Medium KNN models were at the top. The Neural Network models also varied in performance. The best-performing network had 141 neurons (93.24% sensitivity, 96.13% specificity and 97.18% PPV), followed by a 36-neuron network (94.0% sensitivity, 96.05% specificity and 96.41% PPV). The rule-point system resulted in 90.9% sensitivity, 90.9% specificity and 90.3% PPV. Conclusion: Among the machine learning classifiers, the Neural Network model achieved the highest detection performance. One major drawback of this model is its poor transparency and limited ability to explain its results to domain experts. We found no apparent relationship between the number of neurons in the hidden layer and the detection accuracy of the network. The GA-optimized rule-point sepsis alert system can provide performance comparable to the NN and SVM models.
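For readers unfamiliar with the rule-point idea, the toy below shows the general shape of such an alert system: each rule fires when a vital sign crosses a threshold and contributes weighted points, and an alert is raised once the total reaches a cutoff. The variables, thresholds, weights and cutoff are invented for illustration and are not those of the study, where the thresholds were tuned per patient group by a genetic algorithm.

```python
# Illustrative rule-point alert sketch; all numbers are made up.
ALERT_CUTOFF = 4

RULES = [  # (variable, comparator, threshold, points)
    ("heart_rate",  lambda v, t: v > t,  110, 2),
    ("temperature", lambda v, t: v > t, 38.3, 2),
    ("resp_rate",   lambda v, t: v > t,   22, 1),
    ("systolic_bp", lambda v, t: v < t,  100, 3),
]

def sepsis_score(patient):
    """Sum the weighted points of every rule that fires."""
    return sum(p for var, cmp, thr, p in RULES if cmp(patient[var], thr))

def sepsis_alert(patient):
    return sepsis_score(patient) >= ALERT_CUTOFF

septic = {"heart_rate": 125, "temperature": 39.0, "resp_rate": 24, "systolic_bp": 92}
healthy = {"heart_rate": 72, "temperature": 36.8, "resp_rate": 14, "systolic_bp": 118}
print(sepsis_score(septic), sepsis_alert(septic))    # 8 True
print(sepsis_score(healthy), sepsis_alert(healthy))  # 0 False
```

A GA would search over the threshold values (and possibly the weights) to maximise detection performance on adjudicated cases, which is the optimization the abstract describes.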

Prof. Aleksandar Poleksic
Department of Computer Science, University of Northern Iowa, USA

Bio: Aleksandar Poleksic, Ph.D., is an Assistant Professor in the Department of Computer Science at the University of Northern Iowa. His training and interests span the fields of computational biology and bioinformatics, with a strong emphasis on novel algorithm and application development. Prior to joining UNI, Dr. Poleksic was a Senior Scientist at Eidogen-Sertanty, Inc., where he helped develop Eidogen's computational drug discovery platform, which integrates numerous algorithms in the area of protein structure determination and analysis. Dr. Poleksic was recently granted a United States patent for alignment algorithms and methodologies for rapid protein homology detection.

  • Title: Integrating biological knowledge to better predict relationships in a biological system
  • Abstract: A key problem in biological research is to discover and quantify the relationships (associations or interactions) between different entities in a complex biological system. Recent years have seen the development of computational techniques for predicting such relationships based upon data from a specific pair of domains (e.g. predicting whether any given drug is likely to treat any given disease). However, in spite of the advanced computational techniques available, progress toward the prediction accuracy necessary for biological discoveries has been dismal at best. We demonstrate that a better understanding of a biological system as a whole is necessary to overcome the poor performance inherent to methods restricted to data from any two given domains. In one approach, a biological system can be viewed as a network consisting of different domains (such as genes, diseases, drugs, symptoms, etc.). The individual relationships between entities in any two domains (e.g. drug-treats-disease) in such a network can be predicted not only directly, but also transitively (e.g. drug-upregulates-gene-downregulates-disease). We demonstrate that a simple voting mechanism applied to different types of transitive predictions can significantly increase the accuracy obtained by the current state-of-the-art methodologies.
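The transitive voting idea can be sketched in a few lines: candidate drug-disease links are inferred through intermediate genes, and each drug-gene-disease path counts as one vote for a candidate disease. The entity names and edge sets below are invented for illustration; the actual work combines many relation types and domains.

```python
# Illustrative sketch of transitive prediction with voting over a
# two-hop drug -> gene -> disease network. All names are made up.
from collections import Counter

drug_gene = {"drugA": {"g1", "g2", "g3"}, "drugB": {"g4"}}
gene_disease = {"g1": {"flu"}, "g2": {"flu", "cold"}, "g3": {"flu"}, "g4": {"cold"}}

def transitive_votes(drug):
    """Count how many drug-gene-disease paths support each disease."""
    votes = Counter()
    for gene in drug_gene.get(drug, ()):
        for disease in gene_disease.get(gene, ()):
            votes[disease] += 1   # one path, one vote
    return votes

print(transitive_votes("drugA").most_common())  # [('flu', 3), ('cold', 1)]
```

Ranking candidates by vote count is the simplest form of the voting mechanism the abstract refers to; in practice each path type would carry its own prediction score.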

Prof. Cesare Alippi
Professor (Information Processing Systems), Politecnico di Milano, Milano, Italy and Università della Svizzera italiana, Lugano, Switzerland.

Bio: Prof. Cesare Alippi is a Professor at the Politecnico di Milano, Milano, Italy and the Università della Svizzera italiana, Lugano, Switzerland. Currently, he is a visiting professor at the University of Kobe, Japan, and the University of Guangzhou, China. He has been a visiting researcher at UCL (UK), MIT (USA), ESPCI (F), CASIA (RC) and A*STAR (SIN). Alippi is an IEEE Fellow, a Member of the Administrative Committee of the IEEE Computational Intelligence Society, a Board of Governors member of the International Neural Network Society, a Board of Directors member of the European Neural Network Society, Past Vice-President for Education of the IEEE Computational Intelligence Society, and a past associate editor of the IEEE Transactions on Emerging Topics in Computational Intelligence, the IEEE Computational Intelligence Magazine, the IEEE Transactions on Instrumentation and Measurement, and the IEEE Transactions on Neural Networks. In 2018 he received the IEEE CIS Outstanding Computational Intelligence Magazine Award; in 2016, the Gabor Award from the International Neural Networks Society and the IEEE Computational Intelligence Society Outstanding Transactions on Neural Networks and Learning Systems Paper Award; in 2013, the IBM Faculty Award; and in 2004, the IEEE Instrumentation and Measurement Society Young Engineer Award. His current research activity addresses adaptation and learning in non-stationary environments and intelligence for embedded and cyber-physical systems. He holds 8 patents and has published one monograph, 6 edited books and about 200 papers in international journals and conference proceedings.

  • Title: Neural Graph Processing: an embedding-based approach
  • Abstract: Many fields, like physics, neuroscience, chemistry, and sociology, investigate phenomena by processing multivariate measurements advantageously represented as a sequence of attributed graphs. Graphs come in different forms, with variable attributes, topology, and ordering, making it difficult to perform mathematical analysis in the graph space. Within this framework, we are interested in processing graph datastreams to solve applications, e.g., detecting structural changes in the graph sequence, a situation associated with time variance, faults, anomalies or events of interest, as well as designing sophisticated processing such as that requested by predictors. On the change detection front, theoretical results show that, under mild hypotheses, the confidence level of an event detected in the graph domain can be associated with another confidence level in an embedding space; this enables the identification of events in the graph domain by investigating embedded data. The opposite also holds. However, evaluating distances between graphs and identifying an appropriate embedding for the problem at hand are far from trivial tasks, with deep adversarial learning approaches and constant-curvature manifold transformations shown to be appropriate transformations able to solve the problem. Deep autoregressive predictive models can then be designed to operate directly on graphs, hence providing the building blocks for further sophisticated neural processing.
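A minimal sketch of the embed-then-detect idea, under strong simplifying assumptions: each graph in the stream is mapped to a fixed-length vector (here just a padded, sorted degree sequence rather than the adversarially learned or constant-curvature embeddings the abstract describes), and a change is flagged when the embedding drifts beyond a threshold from a nominal reference.

```python
# Toy sketch of change detection on a graph stream via embeddings.
# Graphs are plain edge lists on integer nodes; all data is invented.

def embed(graph, dim=6):
    """Embed a graph as its sorted degree sequence, padded to fixed length."""
    deg = {}
    for u, v in graph:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    seq = sorted(deg.values(), reverse=True)
    return (seq + [0] * dim)[:dim]

def distance(a, b):
    """Euclidean distance in the embedding space."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def detect_change(stream, reference, threshold):
    """Flag stream indices whose embedding drifts beyond the threshold."""
    ref = embed(reference)
    return [i for i, g in enumerate(stream) if distance(embed(g), ref) > threshold]

path4 = [(0, 1), (1, 2), (2, 3)]   # nominal topology: a path
star4 = [(0, 1), (0, 2), (0, 3)]   # anomalous topology: a star
stream = [path4, path4, star4, path4]
print(detect_change(stream, path4, threshold=1.0))  # [2]
```

In the work described above, the embedding itself is learned and the threshold comes with a confidence-level guarantee; here both are fixed by hand purely to show the mechanics.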

Prof. Tarek El-Ghazawi
ECE Professor and Director, Institute for Massively Parallel Applications and Computing Technology (IMPACT), The George Washington University, USA

Bio: Tarek El-Ghazawi is a Professor in the Department of Electrical and Computer Engineering at The George Washington University, where he leads the university-wide Strategic Academic Program in High-Performance Computing. His research interests include high-performance computing, computer architecture, reconfigurable computing and parallel programming. He is the founding director of The GW Institute for Massively Parallel Applications and Computing Technologies (IMPACT) and was a founding Co-Director of the NSF Industry/University Center for High-Performance Reconfigurable Computing (CHREC). He is one of the principal co-authors of the UPC parallel programming language and the primary author of the UPC book from John Wiley and Sons. Prof. El-Ghazawi has published well over 250 refereed research publications in this area. Prof. El-Ghazawi has served and is serving in many editorial roles including an Associate Editor for the IEEE Transactions on Computers and the IEEE Transactions on Parallel and Distributed Systems.

  • Title: Rebooting Computing: The Search for Post-Moore’s Law Breakthroughs
  • Abstract: The field of high-performance computing (HPC), or supercomputing, refers to building and using computing systems that are orders of magnitude faster than our common systems. The top supercomputer, Summit, can perform 148,600 trillion calculations in one second (148.6 PF on LINPACK). The top two supercomputers are now in the USA, followed by two Chinese supercomputers. Many countries are racing to break the record and build an ExaFLOP supercomputer that can perform more than one million trillion (a quintillion) calculations per second. In fact, the USA is planning two supercomputers for 2021, one of which (Frontier) will, when fully operational, perform at 1.5 EF. Scientists, however, are concerned that we are reaching many physical limits and need new innovative ideas to make it to the next generation of computing. This talk will consider where we stand and where we are going with the current state of supercomputing, with emphasis on future processors, and some of the ideas that scientists are examining to re-invent computing. A comparative understanding of neuromorphic and brain-inspired computing, quantum computing and other innovative computing paradigms will be provided, along with an assessment of progress so far and the road ahead. Further, I cover some of our own progress on nanophotonic post-Moore’s-law processing efforts.

Prof. Jean-Pierre Leburton
Gregory Stillman Professor of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, USA.

Bio: Jean-Pierre Leburton is the Gregory E. Stillman Professor of Electrical and Computer Engineering and professor of Physics at the University of Illinois at Urbana–Champaign. He is known for his work on semiconductor theory and simulation, and on nanoscale quantum devices including quantum wires, quantum dots, and quantum wells. He studies and develops nanoscale materials with potential electronic and biological applications. Jean-Pierre Leburton was born on March 4, 1949 to Edmond Jules Leburton and Charlotte (Joniaux) Leburton in Liège, Belgium. His father, at one time Prime Minister of Belgium, sparked Jean-Pierre Leburton's interest in physics. Jean-Pierre Leburton received his Licence (B.Sc.) in Physics in 1971 and his Doctorat (Ph.D.) in 1978 from the University of Liège, Belgium. Leburton worked as a research scientist at the Siemens AG research laboratory in Munich, Germany from 1979 to 1981. From 1981-1983, Leburton worked at the University of Illinois at Urbana–Champaign (UIUC) as a visiting assistant professor. In 1983 he joined the faculty as an assistant professor. He became an associate Professor in 1987 and a full professor in 1991. He worked with Karl Hess, co-director of the Beckman Institute for Advanced Science and Technology, and became one of the original faculty members at the Beckman Institute in 1989. He held the Hitachi LTD Chair on Quantum Materials as a visiting professor at the University of Tokyo, Japan in 1992. He was also a visiting professor at the Swiss Federal Institute of Technology in Lausanne, Switzerland in 2000. He has published more than 300 papers in technical journals and books. His simulation tools and physical models help to describe behavior of quantum wires, quantum dots, and quantum wells. He has studied the optical properties of superlattices and established the index of refraction in superlattices both experimentally and theoretically.

  • Title: 2D Nano-Electronic Materials for Bio-sensing
  • Abstract: The last two decades have seen rapid technological developments in the search for cheap, high-accuracy devices for fast bio-molecular identification. In the realm of DNA and protein sequencing, there has been increasing interest in the use of nanopores in solid-state materials because of their distinct advantages over biological pores in terms of flexibility in pore design and mechanical strength. Two-dimensional (2D) solid-state materials such as graphene and molybdenum disulphide (MoS2) in particular have attracted attention because of their atomically thin layered structure and electrically active characteristics, predisposing them to offer single-base resolution and, simultaneously, multiple modalities for detecting biomolecular translocation. 2D nanopore devices promise seamless integration with semiconductor electronics and are poised to revolutionize a variety of technologies such as genomics, point-of-care diagnostics and digital data storage, to name a few. The past year has witnessed a flurry of activity to experimentally realize nanopore Field Effect Transistors (FETs) and understand the fundamental sensing mechanism in such devices. Currently, the dominant consensus from theoretical calculations involves the electrostatic modulation of the FET current by the translocating biomolecules. In this talk, we review and provide insights into this sensing principle by modeling the electron flow through 2D-material nanopore FETs. We describe a method to systematically characterize nanopore FETs by contrasting the changes in FET behavior before and after nanopore drilling and DNA translocation. We outline measurable predictions of high-resolution FET-based sensing of DNA-protein complexes and damaged DNA. We compare these FET signals to the corresponding ionic current signals calculated from all-atom molecular dynamics simulations.
Further, we also outline possible techniques to improve the detection SNR by augmenting pore and device design with statistical signal processing algorithms. Finally, we propose a scalable device design of nanopore FETs to detect and identify translocations of single-biomolecules in a massively parallel scheme.

Prof. Dr. Peter Peer
Vice-Dean, Faculty of Computer & Information Science; Computer Vision Lab, University of Ljubljana, Slovenia, European Union.

Bio: Peter Peer was born on January 3, 1975 in Slovenj Gradec. He obtained his Ph.D. from the University of Ljubljana, Slovenia, in 2003. He currently works at the Faculty of Computer and Information Science, University of Ljubljana, where he does research in Computer Vision, Biometrics, Artificial Intelligence, and Human-Computer Interaction. He has been listed as a notable computer scientist and educator by Marquis Who's Who. He is a member of the Slovenian Pattern Recognition Society, the Institute of Electrical and Electronics Engineers, and the Rotary Club.

  • Title: Generative Deep Neural Networks for Face Deidentification
  • Abstract: Image and video data are today being shared between government entities and other relevant stakeholders on a regular basis and require careful handling of the personal information contained therein. A popular approach to ensure privacy protection in such data is the use of deidentification techniques, which aim at concealing the identity of individuals in the imagery while still preserving certain aspects of the data after deidentification. In this talk, we present a novel approach towards face deidentification, called k-Same-Net, which combines recent Generative Neural Networks (GNNs) with the well-known k-Anonymity mechanism and provides formal guarantees regarding privacy protection on a closed set of identities. Our GNN is able to generate synthetic surrogate face images for deidentification by seamlessly combining features of identities used to train the GNN model. Furthermore, it allows us to control the image-generation process with a small set of appearance-related parameters that can be used to alter specific aspects (e.g., facial expressions, age, gender) of the synthesized surrogate images. We demonstrate the feasibility of k-Same-Net in comprehensive experiments on the XM2VTS and CK+ datasets. We evaluate the efficacy of the proposed approach through reidentification experiments with recent recognition models and compare our results with competing deidentification techniques from the literature. We also present facial expression recognition experiments to demonstrate the utility-preservation capabilities of k-Same-Net. Our experimental results suggest that k-Same-Net is a viable option for facial deidentification that exhibits several desirable characteristics when compared to existing solutions in this area.
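k-Same-Net builds on the classic k-Same mechanism, which can be sketched as follows: each face (represented here as a plain feature vector) is replaced by the centroid of its k closest identities, so any surrogate maps back to at least k candidate identities. The feature vectors below are toy assumptions; k-Same-Net itself feeds such identity mixtures into a generative network rather than averaging features directly.

```python
# Illustrative sketch of the k-Same deidentification principle.
# Faces are stand-in 2-D feature vectors; real systems use learned features.
import math

def k_same(faces, k):
    """faces: {identity: feature vector}; returns {identity: surrogate}."""
    out = {}
    for name, vec in faces.items():
        # k nearest identity vectors (including the face itself).
        nearest = sorted(faces.values(), key=lambda v: math.dist(v, vec))[:k]
        # Surrogate = centroid of those k vectors.
        out[name] = [sum(c) / k for c in zip(*nearest)]
    return out

faces = {"ann": [1.0, 0.0], "bob": [0.9, 0.1], "eve": [10.0, 10.0]}
print(k_same(faces, 2))
```

Because "ann" and "bob" are each other's nearest neighbours, they receive the same surrogate, which is exactly the 1/k reidentification bound the k-Anonymity mechanism provides on a closed identity set.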

Prof. Stephen Pistorius
Professor & Associate Head, Medical Physics (Physics & Astronomy); Professor (Radiology); Vice-Director & Graduate Chair, Biomedical Engineering Graduate Program, University of Manitoba, Canada.

Bio: Stephen is a Professor of Physics and an Associate Professor of Radiology at the University of Manitoba (UM), Winnipeg, MB, Canada, and a Senior Scientist with CancerCare Manitoba (CCMB), Winnipeg, MB, Canada, and the Research Institute of Oncology and Hematology (RIOH). He has experience in the military, industry, health care, and academia. He is a certified Medical Physicist, a licensed Professional Physicist, and a fellow of the Canadian Organization of Medical Physics (COMP). His research interests include cancer imaging for both early detection and optimized radiation therapy. He has supervised more than 50 students and postdoctoral fellows. He has served as President of COMP and is Vice-President Elect of the Canadian Association of Physicists. He is the Director of the CAMPEP-accredited Medical Physics graduate program at UM and a Vice-Director of the UM Biomedical Engineering Graduate Program.

  • Title: Towards Medical Imaging without Images: Advanced Image Reconstruction and Machine Learning in PET and Microwave Imaging
  • Abstract: Cancer mortality is higher in remote regions of Canada and in developing countries where access to early detection is limited. Mammography, the standard for breast cancer screening, uses ionizing radiation, requires breast compression, has a high false-positive rate, and requires a well-established human and capital infrastructure. Positron Emission Tomography (PET), an important functional imaging modality, also uses ionizing radiation, which scatters when it interacts with tissue, depositing dose but traditionally not providing any value from an imaging perspective. Breast Microwave Imaging (BMI) and scatter-enhanced PET potentially offer improved sensitivity and specificity compared to traditional mammographic X-ray and PET reconstruction techniques. Image reconstruction typically uses algebraic methods, such as iterative Delay-and-Sum (for microwave imaging) or Maximum Likelihood Expectation Maximization (MLEM) approaches (for PET), but there are technical and computational constraints which limit their applicability. This presentation is a tour through time and place, starting with experiments that show the benefits of improved reconstruction techniques for microwave radar imaging for breast cancer detection and scatter imaging for PET, and finishing with a demonstration of how machine learning and artificial intelligence can be used to reconstruct images and to detect breast lesions even when they may not be visible.
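The MLEM approach mentioned in the abstract has a compact multiplicative update, x ← x / (Aᵀ1) · Aᵀ(y / Ax), which the toy below applies to a hand-made 3×3 system matrix with noiseless data. Real PET reconstructions use very large sparse system matrices and Poisson-noisy sinograms; every number here is an illustrative assumption.

```python
# Toy MLEM iteration for a tiny emission-tomography-style problem.
# A maps a 3-voxel "image" to 3 detector bins; y is noiseless data.
A = [[1.0, 0.5, 0.0],
     [0.0, 1.0, 0.5],
     [0.5, 0.0, 1.0]]
true_x = [2.0, 1.0, 3.0]
y = [sum(a * t for a, t in zip(row, true_x)) for row in A]  # y = A @ true_x

def mlem(A, y, iters=2000):
    n = len(A[0])
    x = [1.0] * n                                         # uniform start image
    sens = [sum(row[j] for row in A) for j in range(n)]   # A^T 1 (sensitivity)
    for _ in range(iters):
        proj = [sum(a * v for a, v in zip(row, x)) for row in A]  # A x
        ratio = [yi / pi for yi, pi in zip(y, proj)]              # y / Ax
        back = [sum(A[i][j] * ratio[i] for i in range(len(A)))    # A^T ratio
                for j in range(n)]
        x = [xj * bj / sj for xj, bj, sj in zip(x, back, sens)]   # update
    return x

print([round(v, 3) for v in mlem(A, y)])  # converges toward true_x
```

The multiplicative form keeps the image nonnegative at every iteration, which is one reason MLEM is the workhorse for PET despite the computational constraints the talk discusses.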

Prof. Viera Rozinajova
Institute of Informatics, Information Systems and Software Engineering; Vice-Dean, Faculty of Informatics and Information Technologies (FIIT), Slovak University of Technology, Bratislava, European Union

Bio: Viera Rozinajova is an Associate Professor at the Institute of Informatics, Software Engineering and Information Systems, Faculty of Informatics and Information Technologies, Slovak University of Technology (FIIT STU) in Bratislava, Slovakia. Currently she is Vice-Dean of the faculty, responsible for research, projects and industry cooperation, Director of the Industrial Research Centre and head of the Big Data Analysis group at FIIT STU. Previously, Rozinajova worked as a researcher at the University of Stuttgart, Germany. She is the author or co-author of more than 60 publications in scientific journals and conferences. Rozinajova has also led several projects focused on big data analysis, software development and related research. She is a member of the International Federation for Information Processing (IFIP) TC8 (Information Systems) as the national representative of Slovakia, the Secretary of the Slovakia ACM Chapter, and a member of the Slovak Computer Science Society.

  • Title: Towards more effective Smart Grid using Data Analytics
  • Abstract: Energy has long been among the most important aspects of our lives. Several new phenomena characterize the current smart grid environment: energy is no longer produced only in the traditional way, but is also supplied by distributed renewable energy sources, so consumers are becoming prosumers. Another aspect concerns the anticipated massive adoption of electric vehicles, which will impose a larger load; the grid must be prepared for this type of load as well. In this talk we offer an insight into contemporary approaches to making smart grid operation more effective, based on the analysis of the huge datasets generated by smart meters. Since traditional ways of data processing are no longer sufficient, there is an urgent need for novel approaches. To manage the smart grid effectively, we need smart technologies for power load forecasting, trading, load balancing and grid optimization. Our research team has developed successful solutions to these challenges, making it possible to achieve substantial energy and financial savings.
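As a minimal illustration of the load-forecasting task (not the team's method), the sketch below computes the seasonal-naive baseline that smart-meter forecasts are commonly measured against, on synthetic hourly data:

```python
import numpy as np

# Seasonal-naive load forecast baseline on synthetic smart-meter data:
# predict each hour of the next day by repeating the same hour of the
# previous day. Any serious forecasting model should beat this.
period = 24                        # daily cycle, hourly readings
rng = np.random.default_rng(0)
hours = np.arange(24 * 14)         # two weeks of hourly load
load = (100 + 30 * np.sin(2 * np.pi * hours / period)
        + rng.normal(0, 2, hours.size))

train, test = load[:-period], load[-period:]
forecast = train[-period:]         # seasonal naive: repeat the last full day

mae = np.abs(forecast - test).mean()
print(f"MAE of seasonal-naive baseline: {mae:.2f}")
```

On this noisy-sinusoid stand-in the baseline error is roughly the noise level; real consumption data have weekly and holiday effects, which is where the more elaborate methods described in the talk earn their keep.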

Prof. Pierre Maréchal
Institut de Mathématiques de Toulouse, Université Paul Sabatier, Toulouse, France.

Bio: Prof. Pierre Maréchal is a professor of mathematics at Université Paul Sabatier, Toulouse, France. He obtained his PhD degree from the University of Toulouse, with honors, in May 1997, and was a postdoctoral fellow at Simon Fraser University, Vancouver, Canada, from June 1997 to August 1999. He was accredited to supervise research (Habilitation à Diriger des Recherches) in applied mathematics at the University of Montpellier in 2002. His research interests include inverse problems, optimization, convex analysis and applications.

  • Title: Blind deblurring of barcodes via Kullback-Leibler divergence
  • Abstract: Barcode encoding schemes impose symbolic constraints which fix certain segments of the image. The maximum entropy on the mean method enables the use of a prior probability distribution. We exploit these complementary features to develop an entropic method for blind barcode deblurring, and assess our results using both standard barcode-reading software and smartphones.
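For readers unfamiliar with the divergence in the title, here is a minimal sketch of the discrete Kullback-Leibler divergence on toy intensity histograms; the histograms are invented for illustration and this is not the authors' actual deblurring criterion:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) between histograms."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()          # normalize to distributions
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# A sharp barcode's intensity histogram is nearly two-valued (black/white);
# blur pushes probability mass into the mid-gray bin. KL divergence
# quantifies that mismatch.
sharp   = [0.5, 0.0, 0.5]   # toy 3-bin histogram: black, gray, white
blurred = [0.3, 0.4, 0.3]
print(round(kl_divergence(sharp, blurred), 3))
```

Minimizing a KL-type functional subject to the symbolic barcode constraints is the flavor of optimization an entropic method performs; the `eps` guard is a common numerical convention for empty bins.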

Prof. Nong Ye
Director, Information and Systems Assurance Laboratory, Ira A. Fulton School of Engineering, Arizona State University, USA.

Bio: Dr. Ye is a full professor at Arizona State University. Her past and current research has received over $8M external funding support and has produced eighty-five journal papers and five books, including Data Mining: Theories, Algorithms, and Examples. Her recent research focuses on developing data mining algorithms to discover multivariate data associations for capturing both partial-value and full-value variable associations as well as both individual and interactive effects of multiple variables. New algorithms have been applied to cyber-attack detection, engineering retention and education, and energy systems modelling.

  • Title: Multivariate and Univariate Analysis of Engineering Student Data to Identify Engineering Retention Characteristics
  • Abstract: In many fields, relations of variables hold for only certain values of the variables, or there are different relations for different values of the variables. The Partial-Value Association Discovery (PVAD) algorithm discovers variable relations/associations that exist in partial ranges of variable values from large amounts of data in a computationally efficient way. This lecture shows how the PVAD algorithm is used in two applications. The first application uses the PVAD algorithm to analyze engineering student data; partial-value data associations of engineering student characteristics are examined to identify engineering retention characteristics. The second application uses the PVAD algorithm to analyze network flow data from computer networks; partial-value data associations of network flow characteristics are put together to detect network anomalies for cyber security.

Prof. Gordon B. Agnew
Dept. of Electrical and Computer Engineering, University of Waterloo, Canada.

Bio: Gordon B. Agnew is an Associate Professor in the Electrical and Computer Engineering department at the University of Waterloo. He is also a co-founder of Certicom Corp., a subsidiary of BlackBerry that protects content and devices around the world. Professor Agnew’s main research interests include cryptography, data security, communication security and high-speed communication networks. He is a Foundation Fellow of the Institute for Combinatorics and its Applications. He is also a Fellow of the Canadian Academy of Engineering.

  • Title: Tracing How the Evolution of Cryptographic Systems is Tied to the Evolution of Computing
  • Abstract: In this talk we will examine the dramatic impact that advances in computing have had on the evolution of cryptographic systems. One of the most significant areas is Public Key Cryptography. In the mid-1970s, when the first public disclosure of Public Key Cryptography was made, computational complexity made its use on personal computers impractical. In 1985, the use of Elliptic Curves as a method of Public Key Cryptography was put forward; at the time, Elliptic Curve Cryptography (ECC) was thought impractical due to the computational complexity involved. Today, Elliptic Curve Cryptography is used in many applications on smartphones. We will trace the history of how improvements in computing power had a direct impact on cryptographic systems.
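To make the computational story concrete, here is a toy sketch of the elliptic-curve group law over a tiny prime field, using a standard textbook curve (y² = x³ + 2x + 2 over F₁₇); deployed ECC performs the same operations over primes of roughly 256 bits, which is exactly the cost that once made it look impractical:

```python
# Toy elliptic-curve arithmetic over F_17 (textbook curve, NOT secure).
P_MOD, A = 17, 2
INF = None  # point at infinity, the group identity

def ec_add(P, Q):
    """Add two points on y^2 = x^3 + A*x + 2 over F_P_MOD."""
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                        # P + (-P)
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)    # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)           # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Double-and-add: the workhorse operation of ECC."""
    R = INF
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (5, 1)                 # generator of order 19 on this curve
print(scalar_mult(2, G))   # → (6, 3)
```

Double-and-add needs only about log₂(k) point operations, which is why hardware and algorithmic advances eventually made 256-bit scalar multiplications routine on a phone.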

Professor Jason Crain
IBM Research, United Kingdom & Visiting Professor, University of Oxford, United Kingdom

Bio: Professor Crain, originally from New York City, received his undergraduate degree in Physics from the Massachusetts Institute of Technology (where he was an MIT-Japan program intern) and his PhD from the University of Edinburgh. He was on the faculty of the University of Edinburgh for 25 years, where he held the Chair of Applied Physics. From 2007 he held senior management and executive positions (Head of Physical Sciences and Executive Director of Research) at the UK's National Physical Laboratory. In these roles he was responsible for the National Laboratory's research strategy, science quality and pipeline from research to commercial services and standards. He was a member of the Heads of Science and Profession (HoSEP) Board chaired by the UK's Chief Science Advisor. He is a Fellow of the Institute of Physics and Senior Visiting Fellow and Laboratory Council Member of the National Nuclear Laboratory. He has collaborated with IBM Research since 2003. He was appointed Visiting Professor at the University of Oxford in 2018.

  • Title: Next generation materials simulation - applications of electronic coarse-graining to complex systems
  • Abstract: Atoms and molecules adapt to their environment through a hierarchy of electronic responses. These fundamental, many-body phenomena give rise to emergent behaviour across the physical and life sciences. However, their incorporation in predictive simulations of large systems faces significant challenges. We introduce here a new class of molecular model employing embedded quantum oscillators as a coarse-grained representation of the collective electronic responses. This representation generates all manifestations of long-range interactions. The resulting level of completeness in physical description enables isolated molecule properties to define model parameters, thereby eliminating fitting to condensed phase data. Thus, the framework provides a physical and intuitive basis for predictive, next-generation simulation which can be implemented efficiently on massively parallel computers. Path integral methods are introduced as a practical solution to the coarse-grained version of the many-body electronic problem which avoids artificial truncation of the interaction terms. Combined with molecular dynamics to evolve nuclei on the model electronic surface, through on-the-fly force computation, the method can simulate large soft matter systems at finite temperature. Example applications to specific systems will be presented.

Prof. Sokrates T. Pantelides
University Distinguished Professor of Physics and Engineering, William A. and Nancy F. McMinn Professor of Physics and Professor of Electrical Engineering, Vanderbilt University, USA. Distinguished Visiting Scientist, Oak Ridge National Laboratory, USA

Bio: Sokrates T. Pantelides received the Ph.D. degree in physics from the University of Illinois at Urbana-Champaign, Champaign, IL, USA, in 1973. He is the University Distinguished Professor of Physics and Engineering, the William A. and Nancy F. McMinn Professor of Physics, and Professor of Electrical Engineering at Vanderbilt University, Nashville, TN, USA. He holds a secondary appointment as a Distinguished Visiting Scientist with the Oak Ridge National Laboratory, Oak Ridge, TN, USA. Before joining Vanderbilt in 1994, he spent 20 years at the IBM T. J. Watson Research Center, Yorktown Heights, NY, USA, where he carried out theoretical research in semiconductors and served as a Manager, a Senior Manager, and a Program Director. He has authored or co-authored numerous research articles and edited nine books. His research focuses on the structure, defect dynamics, and electronic properties of electronic materials, radiation effects, transport in molecules and thin films, and catalysis. Prof. Pantelides is a fellow of the American Physical Society, the Materials Research Society, and the American Association for the Advancement of Science. He was named Fellow of the Institute of Electrical and Electronics Engineers in 2015 for contributions to point-defect dynamics in semiconductor devices.

  • Title: Probing nanoscale materials by combining computation and microscopies
  • Abstract: The advent of high-performance computers and advanced algorithms enables quantum calculations in nanostructures with high accuracy, probing atomic configurations and electronic, magnetic, optical, and mechanical properties. Atomic-resolution microscopies such as scanning transmission electron microscopy and scanning tunneling microscopy provide complementary experimental observations. This talk will describe a number of examples, primarily in two-dimensional materials, where a combination of computation and microscopies led to the discovery of new materials, e.g., fusion of bilayers into a new monolayer with novel stoichiometry [1, 2] and intrinsically patterned 2D materials [3, 4]; new nanostructures, e.g., atomically precise graphene origami [5] and monolayer amorphous carbon [6]; and new phenomena, e.g., unconventional ferroelectricity [7]. Collaborators, coauthors of the cited papers, are acknowledged. The theory work was supported by the U.S. Department of Energy, grant No. DE-FG02-09ER46554.
    1. J. Lin, S. Zuluaga, P. Yu, Z. Liu, S. T. Pantelides and K. Suenaga, "Novel Pd2Se3 Two-Dimensional Phase Driven by Interlayer Fusion in Layered PdSe2", Physical Review Letters 119 (2017).
    2. S. Zuluaga, J. Lin, K. Suenaga and S. T. Pantelides, "Two-dimensional PdSe2-Pd2Se3 junctions can serve as nanowires", 2D Materials 5, 035025 (2018).
    3. X. Lin, J. C. Lu, Y. Shao, Y.-Y. Zhang, X. Wu, J. B. Pan, L. Gao, S. Y. Zhu, K. Qian, Y.-F. Zhang, D.-L. Bao, L. F. Li, Y. Q. Wang, Z. L. Liu, J. T. Sun, T. Lei, C. Liu, J. O. Wang, K. Ibrahim, D. N. Leonard, W. Zhou, H. M. Guo, Y. L. Wang, S.-X. Du, S. T. Pantelides, and H.-J. Gao, "Intrinsically patterned two-dimensional materials for selective adsorption of molecules and nanoclusters", Nature Materials 16, 5 (2017).
    4. Z.-L. Liu, B. Lei, Z.-L. Zhu, L. Tao, J. Qi, D.-L. Bao, X. Wu, L. Huang, Y.-Y. Zhang and X. Lin, "Spontaneous Formation of 1D Pattern in Monolayer VSe2 with Dispersive Adsorption of Pt Atoms for HER Catalysis", Nano Letters (2019).
    5. H. Chen, X.-L. Zhang, Y.-Y. Zhang, D. Wang, D.-L. Bao, Y. Que, W. Xiao, S. Du, M. Ouyang and S. T. Pantelides, "Atomically precise, custom-design origami graphene nanostructures", Science 365, 1036-1040 (2019).
    6. C.-T. Toh, H. Zhang, J. Lin, A. S. Mayorov, Y.-P. Wang, C. Orofeo, D. B. Ferry, H. Anderson, Z. Guo, N. Kakenov, H. R. Sims, K. Suenaga, S. T. Pantelides, and B. Özyilmaz, "Synthesis and properties of free-standing monolayer amorphous carbon", Nature, in press (2019).
    7. J. B. Brehm, S. M. Neumayer, L. Tao, A. O’Hara, M. Chyasnavichus, M. A. Susner, M. A. McGuire, S. V. Kalinin, S. Jesse, P. Ganesh, S. T. Pantelides, P. Maksymovych, and N. Balke, "Tunable quadruple-well ferroelectric van der Waals crystals", Nature Materials, in press (2019).

Prof. Vaidy Sunderam
Samuel Candler Dobbs Professor of Computer Science, Chair, Department of Mathematics and Computer Science, Director, Computational and Life Sciences Strategic Initiative, Emory University, Atlanta, USA.

Bio: Vaidy Sunderam is Samuel Candler Dobbs Professor of Computer Science at Emory University and Chair of the Department of Mathematics and Computer Science. His research interests are in parallel and distributed computing systems, security and privacy issues in spatiotemporal systems, high-performance message passing environments, and infrastructures for collaborative computing. His prior and current research efforts are supported by grants from NSF, DoE, AFOSR, and NASA and have focused on systems for mobile computing middleware, collaboration, and heterogeneous metacomputing, including the PVM system and several other frameworks such as IceT, H2O, and Harness. Professor Sunderam teaches computer science at the beginning, advanced, and graduate levels, and advises graduate theses in the area of computer systems.

  • Title: Trends and Issues in Data Driven Dynamic Systems
  • Abstract: Data-driven systems are rapidly increasing in popularity and deployment, especially in spatio-temporal domains. Numerous smart devices collect and report observations, for individual and collective value, but with variable reliability and potential loss of privacy. We present several research contributions aimed at addressing: (1) task assignment in crowdsourcing systems with privacy protection, whereby participants may be optimally assigned tasks in a geospatial setting without compromising their true locations; (2) truth discovery, whereby reports or observations from multiple entities can be fused to improve the veracity of specific events; and (3) tensor factorization methods for extracting patterns from spatiotemporal data that can be used for a variety of applications. Models, issues, approaches, and preliminary results will be presented.

Prof. Khan M. Iftekharuddin
Associate Dean for Research and Graduate Programs, Director, Vision Lab, Batten College of Engineering and Technology, Old Dominion University, USA

Bio: Dr. Khan M. Iftekharuddin is a professor in the department of electrical and computer engineering at Old Dominion University (ODU). He serves as the director of the ODU Vision Lab. He is also a member of the biomedical engineering program at ODU. Prior to ODU he was an associate professor in the department of electrical and computer engineering at the University of Memphis (U of M). There he also held a joint appointment with the joint graduate program in biomedical engineering at the U of M and the University of Tennessee at Memphis, as well as the bioinformatics program at the U of M. He was an assistant professor of computer science and electrical and computer engineering at North Dakota State University (NDSU) before joining the U of M in the fall of 2000.

  • Title: Quantitative Image Analysis for Brain Tumor Segmentation
  • Abstract: This talk will discuss an integrated quantitative image analysis framework that includes all necessary steps: MRI inhomogeneity correction, feature extraction, multiclass feature selection, and multimodality abnormal brain tissue segmentation. We will demonstrate the efficacy of intensity and multiresolution texture features, such as fractal dimension (FD) and multifractional Brownian motion (mBm), for robust tumor and abnormal tissue segmentation in brain MRI. We introduce several machine learning and deep learning methods for this purpose. Finally, we evaluate our method on large-scale public and private datasets.
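As a pointer to what a fractal-dimension feature measures, here is a minimal box-counting estimator on a synthetic binary mask; this is a generic sketch, not the talk's feature pipeline, and real use would apply it to segmented MRI regions:

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a square binary
    image: the slope of log(box count) versus log(1 / box size)."""
    counts = []
    n = img.shape[0]
    for s in sizes:
        # partition the image into s-by-s boxes and count the boxes that
        # contain at least one foreground pixel
        blocks = img[:n - n % s, :n - n % s].reshape(n // s, s, -1, s)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled square region has dimension ~2.
img = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(img), 2))  # → 2.0
```

Tumor boundaries tend to be rougher than healthy tissue boundaries, so their estimated dimension sits between 1 and 2, which is what makes FD a useful texture feature for segmentation.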

Prof. Doan B Hoang
School of Electrical and Data Engineering, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia.

Bio: Doan B. Hoang is a Professor in the School of Electrical and Data Engineering, Faculty of Engineering and Information Technology, University of Technology Sydney (UTS). Currently, he leads research in Virtualized Infrastructures and Cyber Security (VICS). He was the Head of School, UTS School of Computing and Communications (2011-2015), the Director of iNEXT – UTS Centre for Innovation in IT Services and Applications in the School of Computing and Communications (2007-2017). His current research interests include: Resources Optimization of Software-defined Infrastructures (Cloud, SDN, and NFV), Quantitative metrics for Cyber Security Risks, Policy-driven Interaction for Cloud security, IoT Trust and Privacy, Provisioning IoT services on demand, and Intelligent models for Assistive Healthcare. Professor Hoang has published over 240 research papers and graduated 17 Ph.Ds and 8 Masters under his supervision. Before UTS, he was with Basser Department of Computer Science, University of Sydney. He held various visiting positions: Visiting Professorships at the University of California, Berkeley; Nortel Networks Technology Centre in Santa Clara, the University of Waterloo, Carlos III University of Madrid, Nanyang Technological University, Lund University, and POSTECH University. While on sabbatical at UC Berkeley and Nortel Networks, he participated and led several DARPA-sponsored projects including Openet, Active Networks, and DWDM-RAM: A Data-Intensive Service-on-Demand Enabled by Next Generation Dynamic Optical Networks. 
He has delivered keynotes at international conferences in recent years, including the 12th National Conference on Fundamental and Applied IT Research (FAIR 2019, Hue, Vietnam) on The Industrial Internet Revolution and Digital Transformation Challenges; the 2017 NAFOSTED Conference on Information and Computer Science (NICS, Hanoi, Vietnam) on Software Defined Infrastructures and Software Defined Security; the 2016 IEEE NetSoft workshop on Security in Virtualised Networks (NetSoft, Seoul, S. Korea); the 2014 Asia-Pacific Conference on Computer Aided System Engineering (APCASE, Bali, Indonesia) on research challenges in Cloud Computing, Internet of Things and Big Data; and the 2013 IET/IEEE Second International Conference on Smart and Sustainable City (ICSSC, Shanghai, China) on Cloud Computing and Wireless Sensor/Actor Networks for Smart City.

  • Title: Cloud Security – quantitative metrics, software-defined policy driven interaction, and trust-based big data processing framework.
  • Abstract: The demand for security of cyber systems is ever-increasing as these critical infrastructures constantly adapt to emerging sophisticated applications and interconnect a vast number of IoT devices. This results in a large attack surface that a cyber system must cover to ensure its security. Cloud computing, in particular, has been widely adopted as a large-scale distributed computing paradigm. The rapid growth of both public and private cloud systems provides cloud users with various options for deploying computational and storage resources, especially for big data applications. These cloud systems unavoidably expose vulnerabilities in data security and privacy due to their outsourced nature. Security and privacy have thus become a great concern in cloud computing, as users risk the leakage of their private data. However, existing cloud security models do not equip themselves with sufficient measures to protect cloud systems proactively as well as reactively. In this talk, we articulate several forward-looking security models and techniques for protecting cloud systems: quantitative metrics for measuring security risks, software-defined policy-driven interaction for detecting and predicting security breaches, and techniques for protecting cloud data.

Prof. Jont Allen
Dept. of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, USA

Bio: Jont Allen is a Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. During his 32-year AT&T Bell Labs career (after 1998, AT&T Labs), Allen specialized in nonlinear cochlear modeling, auditory and cochlear speech processing, and speech perception. While at AT&T, Allen wrote more than 50 sole-authored journal articles on hearing, cochlear modeling, signal processing, room acoustics, speech perception and the articulation index (AI, a.k.a. the speech transmission index (STI) and speech intelligibility index (SII)). In Aug. 2003 he joined the ECE faculty at the University of Illinois, Urbana, where he teaches and works with his students on noninvasive objective diagnostic testing of cochlear and middle-ear function based on acoustic reflectance (aka impedance) methods, auditory psychophysics, speech processing for hearing aid applications (noise reduction and multiband compression), speech and music coding (bit-rate reduction), speech perception (models of loudness and masking), and hearing aid transducer modeling. Allen has been a visiting scientist in the Departments of Otolaryngology of Columbia University, the City University of New York, and the University of Calgary, and was an Osher Fellow at the Exploratorium Museum, San Francisco. He has been very active in IEEE and the ASA, running both major conferences (IEEE-ICASSP 1985, New York) and many small workshops.

  • Title: Desalination of sea water, nature’s way
  • Abstract: Earth is facing an existential crisis like none we have ever witnessed. This is not simply a matter of global warming, which predicts an average temperature increase of a few degrees in 50 years. Such news, while highly disturbing, dramatically understates the seriousness of our problems. These problems may start with the earth’s population: it is possible that the resources required for life on Earth are not up to the demands of 8 billion people. Potable water and food are as serious a problem as the implications of global warming (e.g., rising oceans, extreme weather patterns, fires, etc.). Here we propose an interleaved, multi-tasking plan to solve most if not all of these problems, packaged into one solution. While a specific plan is outlined, it is envisioned that there are many alternatives, not discussed; the guide should be “Whatever works best should be tried, and the winning alternatives adopted.” The proposal is a highly innovative approach that examines all the thermodynamic alternatives for separating ocean water into its 96.7% potable water and 3.5% salts, followed by decomposing the salts into NaCl (85.6%), magnesium (3.7%), sulfate (2.7%), and the final target, calcium carbonate and calcium bicarbonate (1.2%). The five processing steps proposed to convert sea water (OCEAN) into potable water, with the sun as the sole source of thermal energy (1.37 kW/m²), rely on contained evaporation, creating thousands of huge lakes (FRESH WATER LAKE) in the many arid deserts throughout the earth. As these deserts are converted to jungles, they will remove the remaining CO2 from the air through natural processes, solving the problem of global warming over the long term. One of the key components of this proposal is to bring together faculty and students, drawing upon the vast composite knowledge within our university population.
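A quick back-of-envelope check, using standard physical constants rather than figures from the talk, shows what the quoted solar flux implies for evaporation rates:

```python
# How much water can 1 m^2 of peak sunlight evaporate per hour?
# Constants below are standard physics values, not from the talk.
SOLAR_FLUX = 1370.0     # W/m^2, the solar constant quoted in the abstract
                        # (top-of-atmosphere; surface insolation is lower)
LATENT_HEAT = 2.26e6    # J/kg, latent heat of vaporization of water

evap_rate = SOLAR_FLUX / LATENT_HEAT   # kg per second per m^2
per_hour = evap_rate * 3600
print(f"~{per_hour:.1f} kg of water per hour per m^2 at peak flux")
```

Roughly 2 kg per hour per square meter at peak flux, before atmospheric losses and night hours, is why the proposal needs evaporation areas on the scale of deserts rather than plants.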

Prof. Theda Daniels-Race
M. B. Voorhies Distinguished Professor, School of Electrical Engineering and Computer Science and the LSU Center for Computation and Technology (CCT), Louisiana State University, USA

Bio: Dr. Daniels-Race also has a joint appointment to the Center for Computation and Technology at Louisiana State University. She is the founder of the Applied Hybrid Electronic Materials & Structures Laboratory as well as Director of the ECE Division’s Electronic Materials & Devices Laboratory. Her research has encompassed a range of studies upon electronic materials from the growth of compound semiconductors via molecular beam epitaxy (MBE), to investigations of electron transport in low-dimensional systems such as quantum wells, wires, and dots, to device design and fabrication. Her current work is in the area of hybrid electronic materials (HEMs) and involves studies of sample morphologies, nanoscale electronic behavior, and the design of apparatus for HEM deposition.

  • Title: Developments in Hybrid Electronic Materials at the Nanoscale for Next-Generation Applications
  • Abstract: As “small has hit the wall” (Moore’s Law), semiconductor-based industries struggle to keep pace with consumer demands for smaller, faster, and ever more affordable electronic devices. By the same token, researchers operating under the broadly defined umbrella of nanoelectronics seek to challenge the traditional design paradigms and fabrication practices used to create solid-state devices. Thus, motivated by both scientific curiosity and societal needs, my research focus is in the area of HEMs, or hybrid electronic materials. In this talk, I will provide an overview of my work in progress as principal investigator of the Applied Hybrid Electronic Materials & Structures (AHEMS) Laboratory, which I established in the Division of Electrical and Computer Engineering at Louisiana State University. With an eye toward future nanoelectronic materials and devices, my team works to develop ways to deposit and characterize HEMs, and designs innovative yet low-cost apparatus and techniques to explore these materials and their nanoscale properties. By taking a fundamental or “bottom-up” approach to nanoelectronics, we investigate the unique physics and potential functions of HEMs as we look toward developments “beyond the transistor” in the next generation of computing and related applications.

Prof. Hamid Vakilzadian
Department of ECE, University of Nebraska-Lincoln, USA

Bio: Prof. Vakilzadian's academic career began in 1985, when he joined the faculty of the University of Nebraska-Lincoln as a Visiting Assistant Professor. Vakilzadian's field of expertise is the modeling and simulation of continuous and discrete systems and their application in digital systems and their architectures. Vakilzadian has used his expertise in this area in organizing national and international technical and educational conferences. He has been appointed chair of the IEEE West Area, PACE Committee Chair, and Technical Activities Committee Chair of Region 4 of IEEE. He has served as Vice President of SCS Conferences and Vice President of SCS Education. For his leadership roles and editorial activities for SIMULATION: Transactions of The Society for Modeling and Simulation International, Dr. Vakilzadian has been awarded the IEEE-USA Regional Professional Leadership Award; the Nebraska Section Award for Leadership, Management, and Contribution to the Section; IEEE Region 4 Conference Committee Awards (2003-04, 2014); and the SCS Distinguished Service Award (2011). He has secured funding from sources such as NSF, NMIH, Altera, and HP as a PI and Co-PI for his scientific work.

  • Title: Classification and Prediction of Heart Diseases Using Data Mining Techniques
  • Abstract: An analysis for assessing the risk of developing heart disease using data mining techniques is presented. The processed Hungarian medical data set from the University of California, Irvine, was used for analysis and prediction. Twenty out of seventy-four medical and physical attributes were selected for this study. The attributes, such as cholesterol level, heart rate, and level and duration of physical activity, were selected from the list using supervised and unsupervised techniques. Principal component analysis was used to determine the correlation among the attributes for their selection. To establish a baseline for classification, five different classifiers were tested using WEKA. In addition, the patient data were provided to an artificial neural network with one to three hidden layers for classification after training and testing. The results of the baseline classifier, Bayesian classifier, decision tree, nearest neighbor, and support vector machine were determined. The decision tree and support vector machine models achieved correct classification rates of 88.1% and 88.5%, with precision rates of 86.20% and 83.80%, respectively, about 10% better correct classification than the other methods. The classification accuracy of the neural network with 100 to 200 neurons varied between 73.81% and 85.03%.
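The PCA-based attribute screening described above can be sketched with a few lines of linear algebra; the data below are synthetic stand-ins for the medical attributes, not the Hungarian data set:

```python
import numpy as np

# PCA via the covariance eigendecomposition, used here to spot correlated
# (redundant) attributes. Synthetic data: x2 nearly duplicates x1.
rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # strongly correlated attribute
x3 = rng.normal(size=n)                    # independent attribute
X = np.column_stack([x1, x2, x3])

Xc = X - X.mean(axis=0)                    # center each attribute
cov = Xc.T @ Xc / (n - 1)                  # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
explained = eigvals[::-1] / eigvals.sum()  # variance ratio per component
print(np.round(explained, 2))
```

The near-zero trailing component reveals that one of the correlated attributes carries almost no independent information, which is the criterion an attribute-selection step uses to drop it.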

Prof. Alan Sprague
Electrical and Computer Engineering, University of Alabama at Birmingham, Birmingham, Alabama, USA.

Bio: Prof. Alan Sprague is an emeritus Professor in the Computer Science Dept. at the University of Alabama at Birmingham (UAB) and a Visiting Professor in the UAB Electrical & Computer Engineering Dept. Prof. Sprague received a PhD in Mathematics from the Ohio State University, in the field of Combinatorics, in 1973. He also received a PhD in Computer Science from the Ohio State University in 1988. Dr. Sprague's expertise in Math and Computer Science informed his graduate teaching, which focused on the theoretical foundations of Computer Science. His research discoveries in data mining include detection of outliers in large datasets. He has received two US patents for data mining methods and procedures for hospital infection surveillance.

  • Title: Tying Ethics & Teamwork Training for Computing Students - Toward Ethics-in-Action Apps
  • Abstract: Most Computer Science and Engineering curricula in the US require ethics training and teamwork skills, and the Accreditation Board for Engineering and Technology mandates that ethics and teamwork be course objectives in at least one course in these majors. However, these subjects are difficult to cover and sometimes neglected. Absent student training in specific teamwork skills, unsatisfactory teamwork experiences may lead students to fear and avoid teamwork (Lingard, 2010). At Harvard University, Professors Grosz (CS) and Simmons (Philosophy) have designed a ground-breaking model, “EthiCS”, which pairs CS faculty with Philosophy graduate students to find ethics-rich topics in CS courses and embed discussion of those topics in the curricula of each of 13 CS courses (Grosz, 2019). At the University of Alabama at Birmingham, a Computer Science professor (Sprague) and an ethicist (Diaz-Sprague) are pilot testing a four-class-period minimodule on Ethics and Teamwork embedded in CS Capstone courses and advanced ECE courses. The first two class periods offer a refresher on ethical principles and tips on moral reasoning, based on segments of video lectures by Michael Sandel (“Justice: What’s the Right Thing to Do”) and Frans de Waal (“On Animal Morality”). These are followed by guided discussions in which students participate as teams. Outside of class, students read about teamwork, and each student team selects and practices a teambuilding game. On days 3 and 4 the teams lead the class in playing the game. We report results and observations. Also, since the Capstone course requires students to work in teams on a software engineering design project, and EE485 is a prerequisite for EE498, the design course for ECE students, we are encouraging students to develop some type of moral guidance tool. The apps may offer tips for de-escalating conflict or random acts of kindness, or be a repository of time-tested proverbs, aphorisms, or moral codes derived from many religions and cultures. Currently, our team is running an Art, Essay, or App Design Challenge (an Ethics-in-Action contest), open to all interested students, regardless of institution. See

Prof. Martin Fabian
Dept. of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden

Bio: Martin Fabian was born in Gothenburg, Sweden, in 1960. He received the Ph.D. degree in control engineering in 1995 from the Chalmers University of Technology, Gothenburg, Sweden. Martin Fabian is Head of the Automation research group and Full Professor of Automation at the Department of Electrical Engineering, Chalmers University of Technology. His research interests involve generic architectures for flexible production systems, modelling and supervisory control of discrete-event systems, and modular and compositional methods for complex systems.

  • Title: The Supervisory Control Theory; a Classical AI Approach Comes of Age
  • Abstract: The Supervisory Control Theory (SCT) takes a control-theoretic approach to systems modeled by finite-state machines (automata), which is why it can be said to belong to the realm of what has lately come to be called "classical AI". The SCT community has worked on problems within this context for more than 30 years, in many different research directions, the main issue being how to handle the computational complexity. Recent breakthroughs in this research have now made the SCT an approach useful for industrially interesting problems. This talk will present some of the latest developments in the SCT coupled to real industrial applications. New algorithms, and a tool, Supremica, implementing them, will be presented. One industrial example will be the verification of the lane-change algorithm for an autonomous car, where a serious bug was found. This was an MSc project that eventually led to the financing of several industrial PhD students. Another example will be the development of new control systems for Dutch water locks. This was a project financed by the Rijkswaterstaat, the Dutch Ministry of Infrastructure and Water Management, which led to new algorithms for automatic generation of coordination of large numbers of sensors and actuators according to given requirement specifications. In addition, a brief introduction to the relevant parts of automata theory and the SCT will be given, since it is assumed that the audience is not very knowledgeable in the SCT. Supremica is available for download, free for education and research, at
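For readers new to the field, the core supervisory idea can be sketched in a few lines. This is a purely illustrative toy, not an example from the talk: the plant, events, and states below are invented. A supervisor disables controllable events whose occurrence would drive the plant into a forbidden state, while uncontrollable events must always remain enabled.

```python
# Toy sketch of the supervisory-control idea: a supervisor observes a
# plant automaton's state and disables controllable events that would
# lead to a forbidden state. All names here are illustrative.

PLANT = {  # state -> {event: next_state}
    "idle":    {"start": "working"},
    "working": {"finish": "idle", "overload": "failed"},
    "failed":  {},
}
CONTROLLABLE = {"start", "overload"}   # events the supervisor may disable
FORBIDDEN = {"failed"}

def enabled(state):
    """Events allowed under supervision: a controllable event whose
    successor is forbidden is disabled; uncontrollable events stay enabled."""
    return {e for e, s in PLANT[state].items()
            if s not in FORBIDDEN or e not in CONTROLLABLE}

print(enabled("working"))   # {'finish'} -- 'overload' is disabled
```

Tools like Supremica automate this kind of synthesis over large compositions of automata; the snippet only shows the disable/enable mechanism on a single tiny plant.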

Prof. Fred Choobineh
Blackman Distinguished Professor, College of Engineering, Dept. of Electrical and Computer Engineering, University of Nebraska-Lincoln, USA.

Bio: F. Fred Choobineh (M’89) received the B.S.E.E., M.S.I.E., and Ph.D. degrees from Iowa State University Ames, IA, USA, in 1972, 1976, and 1979, respectively. He is currently a Professor of electrical engineering and Blackman Distinguished Professor of Engineering with the University of Nebraska-Lincoln, Lincoln, NE, USA, where he also holds a courtesy appointment as a Professor of Management. His current research interests include electricity market and smart grid modelling and analysis. His research has been funded by the NSF and industry. Prof. Choobineh was the recipient of the Nebraska Legislator Award for Distinguished Teaching in the College of Engineering and is a Fellow of the Institute of Industrial Engineers. He is a registered Professional Engineer in Nebraska, where he also serves on the state Board of Engineers and Architects.

  • Title: Stochastic systems evaluation and selection criteria and their computational challenges
  • Abstract: Stochastic systems are generally evaluated and compared using the expected value of one or more performance metrics of interest. The use of expected value is prevalent in engineering applications and implicitly implies that the designers and/or system operators are risk neutral. However, designers and system operators are rightfully concerned about the system’s risk exposure and therefore should use an evaluation metric that is sensitive to risk. In this talk we review some potential risk-sensitive evaluation criteria and discuss their computational and statistical challenges. We conclude the talk by highlighting a few engineering research areas where risk-sensitive evaluation criteria have been used.
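The difference between risk-neutral and risk-sensitive evaluation can be made concrete with a minimal sketch. The specific criteria discussed in the talk may differ; Conditional Value-at-Risk (CVaR) and the sample data below are chosen purely for illustration.

```python
# Compare a risk-neutral criterion (expected value) with a risk-sensitive
# one (Conditional Value-at-Risk) on sampled costs of two hypothetical systems.

def mean(samples):
    return sum(samples) / len(samples)

def cvar(samples, alpha=0.9):
    """CVaR: the average of the worst (1 - alpha) fraction of the costs."""
    ordered = sorted(samples)
    tail = ordered[int(alpha * len(ordered)):]
    return sum(tail) / len(tail)

# System A: steady costs. System B: the same expected cost, but with
# occasional large losses that a risk-neutral criterion cannot distinguish.
system_a = [10.0] * 90 + [12.0] * 10
system_b = [8.0] * 90 + [30.0] * 10

print(mean(system_a), mean(system_b))  # both 10.2: indistinguishable
print(cvar(system_a), cvar(system_b))  # 12.0 vs 30.0: B's tail risk exposed
```

Both systems have identical expected cost, yet the risk-sensitive criterion separates them clearly, which is the talk's motivating point.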

Prof. Suresh Subramaniam
Professor and Chair, Department of Electrical and Computer Engineering, George Washington University, Washington DC, USA

Bio: Suresh Subramaniam received the Ph.D. degree in electrical engineering from the University of Washington, Seattle, in 1997. He is Professor and Chair of Electrical and Computer Engineering at the George Washington University, Washington DC, where he directs the Lab for Intelligent Networking and Computing. His research interests are in the architectural, algorithmic, and performance aspects of communication networks, with current emphasis on optical networks, cloud computing, data center networks, and IoT. He has published over 200 peer-reviewed papers in these areas, and his research has been supported by several grants from the US National Science Foundation and the Defense Advanced Research Projects Agency (DARPA). Dr. Subramaniam is a co-editor of three books on optical networking. He has served in leadership positions for several top conferences including IEEE ComSoc’s flagship conferences of ICC, Globecom, and INFOCOM. He serves on the editorial boards of the IEEE/ACM Transactions on Networking and the IEEE/OSA Journal of Optical Communications and Networking. During 2012 and 2013, he served as the elected Chair of the IEEE Communication Society Optical Networking Technical Committee. He received the 2017 SEAS Distinguished Researcher Award from George Washington University, and was elected Fellow of the IEEE in 2015 for his “contributions to optical network architectures, algorithms, and performance modeling”.

  • Title: The Evolution of Data Center Network Architectures
  • Abstract: Our society is becoming increasingly dependent on analyzing huge amounts of data generated in a large variety of ways. Analyzing the “big data” in the cloud requires a tremendous amount of computing and storage resources, and data centers have emerged as the workhorses of the cloud. Large data centers already consist of tens of thousands of servers, and are expected to scale to hundreds of thousands or even millions of servers with total throughputs on the order of several Tbps. Data centers are extremely power-hungry, and already account for over 2% of worldwide energy consumption. Designing data center networks that are both scalable and energy-efficient is very challenging. To address this challenge, the architecture of the data center network has evolved from the conventional multi-layer architecture to modern approaches that marry electronics and optics. This talk takes a look at this evolution and discusses emerging alternatives.

Prof. Anna Soffía Hauksdóttir
Professor, Department of Electrical and Computer Engineering, Engineering and Physical Sciences, University of Iceland, Reykjavík, Iceland

Bio: Anna Soffía Hauksdóttir graduated from the University of Iceland with a CS degree in Electrical Engineering in 1981. She completed her MS and PhD degrees at The Ohio State University, majoring in control systems, in 1983 and 1987, respectively. She received the "Centennial Keys to the Future Award" from the IEEE Vehicular Technology Society in 1984. She has been a professor at the Electrical and Computer Engineering Department of the University of Iceland since 1988. She is a senior member of the IEEE. She received the Knight's Cross of the Icelandic Order of the Falcon for electrical engineering research achievements in 1998 and the gold medal of the Icelandic Society of Chartered Engineers for contributions to control applications in 2001.

  • Title: The Ups and Downs of Closed Form System Response
  • Abstract: Closed form expressions for solutions to MIMO systems are typically derived by means of Laplace transforms. Central to these expressions is the matrix exponential $e^{At}$, and they usually include the eigenvalues and the coefficients of the characteristic equation. In this paper, we derive such expressions by focusing, instead, on the fundamental solution of the underlying differential equation. The advantage of this is twofold. First, it simplifies the derivation of such expressions. Second, since we present an effective procedure for the evaluation of the fundamental solution and its derivatives, it can be used as a basis for procedures to evaluate these expressions both numerically and symbolically. These expressions lead to formulae for evaluating the response directly at any time when the input is an impulse, polynomial, or harmonic. For more general inputs these formulae can be used to derive time-stepping procedures based on piecewise polynomial approximations. However, the expressions typically become numerically unstable when the systems reach a size of 15-20. Thus we also derive analogous matrix formulae that result in numerically stable expressions for the same cases, when they are evaluated by, e.g., Taylor series approximations. From an educational viewpoint we see it as valuable to relate in this way direct expressions for responses of small systems presented in textbooks to the numerical procedures used in routines for large systems.
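The central role of $e^{At}$ can be illustrated with a tiny, generic sketch (not the authors' procedure): a truncated Taylor series for the matrix exponential, used to evaluate the zero-input response $x(t) = e^{At}x_0$ of a small system.

```python
import math

# Evaluate e^{At} by a truncated Taylor series, sum_k (At)^k / k!, and use it
# for the zero-input response x(t) = e^{At} x0 of a small LTI system dx/dt = A x.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, t, terms=30):
    """Truncated Taylor series for the matrix exponential e^{At}."""
    n = len(A)
    At = [[A[i][j] * t for j in range(n)] for i in range(n)]
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]                                        # (At)^0 / 0!
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, At)]           # (At)^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# Harmonic oscillator dx/dt = [[0, 1], [-1, 0]] x with x0 = [1, 0];
# the exact response is x(t) = [cos(t), -sin(t)].
A = [[0.0, 1.0], [-1.0, 0.0]]
x0 = [1.0, 0.0]
E = expm(A, 1.0)
x_t = [sum(E[i][j] * x0[j] for j in range(2)) for i in range(2)]
print(x_t)  # close to [cos(1), -sin(1)]
```

For this well-conditioned 2x2 example the plain Taylor series works fine; the numerical-stability issues the abstract describes arise for larger systems, which is exactly what motivates the stable matrix formulae of the talk.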

Prof. Israel Koren
Dept. of Electrical and Computer Engineering University of Massachusetts, USA

Bio: Israel Koren is a Professor Emeritus of Electrical and Computer Engineering at the University of Massachusetts, Amherst and a fellow of the IEEE. He has been a consultant to companies like IBM, Analog Devices, Intel, AMD and National Semiconductor. His research interests include fault-tolerant systems, cyber-physical systems, secure cryptographic devices, computer architecture and computer arithmetic. He publishes extensively and has over 300 publications in refereed journals and conferences. He is the author of the textbook "Computer Arithmetic Algorithms,” 2nd Edition, A.K. Peters, Ltd., 2002, and a co-author of the textbook “Fault Tolerant Systems," Morgan Kaufmann, 2007.

  • Title: Detecting and counteracting benign faults and malicious attacks in cyber physical systems
  • Abstract: The use of cyber-physical systems (CPSs) is rapidly expanding, and many of their applications require a highly reliable and secure implementation, as they control critical infrastructures or even life-critical devices. Unfortunately, current techniques for achieving high reliability and security incur high overheads. In particular, integrating countermeasures against security attacks is problematic, as security threats are often not well defined and evolve continuously; as a result, many CPSs remain vulnerable. We propose to exploit the physical plant state information to enhance both reliability and security. Our approach, which monitors the controlled plant state trajectory, allows for tunable fault tolerance as well as detection of malicious attacks, and it achieves these at a low overhead. The plant state space consists of safe and marginal state subspaces. In the safe subspace the CPS will continue its safe operation even if the worst-case control signal is applied. In contrast, any erroneous control applied when the plant state is marginal may lead to a catastrophic system failure. Such an erroneous control output may be due to either a benign fault or a malicious security attack. As most of the time the plant will be deep within its safe subspace, we can avoid using expensive redundancy techniques and thus reduce the computational load while still guaranteeing safe operation. When a marginal state of the plant is detected, it will signal the potential presence of a "natural" fault or malicious attack. Our scheme will counter this by switching to a critical mode involving higher levels of redundancy to combat natural failures as well as alternative mechanisms to defeat malicious attacks. A major challenge in our approach is to determine, in real time, whether the current state of the physical plant is deep within its safe subspace or is marginal.
We have used various machine learning techniques for classifying the state and our results indicate that with a reasonable number of entries in a lookup table and with a short execution time, the required classification can be performed efficiently.
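The lookup-table classification idea can be sketched minimally as follows. Everything here is hypothetical: the grid resolution, state bounds, and safety predicate are invented for illustration, whereas the actual classifier is trained with machine learning on the real plant dynamics.

```python
# Hypothetical sketch: classify a plant state as "safe" or "marginal"
# with a precomputed lookup table over a discretized 2-D state space.

GRID = 10               # cells per dimension (illustrative)
X_RANGE = (-1.0, 1.0)   # state bounds per dimension (illustrative)

def build_table(is_safe):
    """Precompute a safety label for the center of each grid cell (offline)."""
    lo, hi = X_RANGE
    step = (hi - lo) / GRID
    return {(i, j): is_safe((lo + (i + 0.5) * step, lo + (j + 0.5) * step))
            for i in range(GRID) for j in range(GRID)}

def classify(table, state):
    """O(1) runtime lookup: map a state to its cell and read the label."""
    lo, hi = X_RANGE
    step = (hi - lo) / GRID
    cell = tuple(min(GRID - 1, max(0, int((s - lo) / step))) for s in state)
    return "safe" if table[cell] else "marginal"

# Illustrative safety predicate: states well inside the unit disc are safe.
table = build_table(lambda x: x[0] ** 2 + x[1] ** 2 < 0.5)
print(classify(table, (0.1, 0.1)))   # safe: deep inside the safe subspace
print(classify(table, (0.9, 0.0)))   # marginal: near the boundary
```

The expensive work (deciding safety per cell) happens offline; at run time the check is a single table lookup, matching the abstract's point about short execution time.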

Prof. Jason O'Kane
Director, Center for Computational Robotics, Department of Computer Science and Engineering University of South Carolina, USA.

Bio: Jason M. O'Kane is Professor of Computer Science and Engineering, Associate Chair for Academics in the Department of Computer Science and Engineering, and Director of the Center for Computational Robotics at the University of South Carolina. He holds the Ph.D. and M.S. degrees from the University of Illinois at Urbana-Champaign and the B.S. degree from Taylor University, all in Computer Science. He has been awarded the CAREER Award from the US National Science Foundation and was a member of the DARPA Computer Science Study Group. He serves as Associate Editor for the IEEE Transactions on Robotics and the IEEE Robotics and Automation Letters. His research spans algorithmic robotics, planning under uncertainty, and computational geometry.

  • Title: Coverage Planning for Mobile Robots with Constrained Motion and Limited Sensing
  • Abstract: Autonomous mobile robots must operate effectively in spite of movement and sensing capabilities that are often incomplete and unreliable. One recurring example of this confluence of challenges is the coverage problem, in which the robot must move across every point in a region of interest. Coverage algorithms have important applications in environmental monitoring, cleaning, humanitarian demining, painting, and exploration. Known algorithms for such problems generally cannot account directly for realistic limitations on the robot's sensors and actuators. This talk will present our work that overcomes these limitations, leading to results that include both hardness results and practical coverage algorithms for terrestrial, aquatic, and aerial robots.
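As a baseline for what a coverage plan looks like when motion and sensing limits are ignored, here is the classic boustrophedon ("lawn-mower") sweep over an obstacle-free grid. The talk's algorithms address precisely the realistic constraints this toy omits.

```python
# Boustrophedon sweep over a rectangular grid: the simplest complete-coverage
# plan, visiting every cell exactly once using only unit moves.

def boustrophedon(rows, cols):
    path = []
    for r in range(rows):
        # alternate the sweep direction on each row
        cols_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cols_order:
            path.append((r, c))
    return path

path = boustrophedon(3, 4)
print(path[:5])  # [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3)]
```

Every consecutive pair of cells in the returned path is grid-adjacent, so a robot with only forward/turn motion could in principle follow it; handling obstacles, curvature constraints, and limited sensing is where the research lies.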

Prof. Xenofon Koutsoukos
Associate Chair, Department of Electrical Engineering and Computer Science, Vanderbilt University, USA.

Bio: Xenofon Koutsoukos is a Professor with the Department of Electrical Engineering and Computer Science and a Senior Research Scientist with the Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, USA. He was a Member of the Research Staff at the Xerox Palo Alto Research Center (2000–2002), working in the embedded collaborative computing area. He has published more than 250 journal and conference papers and is the co-inventor of four U.S. patents. His research interests include the area of cyber-physical systems with an emphasis on formal methods, data-driven methods, distributed algorithms, security and resilience, diagnosis and fault tolerance, and adaptive resource management. Dr. Koutsoukos was the recipient of the NSF Career Award in 2004, the Excellence in Teaching Award in 2009 from the Vanderbilt University School of Engineering, and the 2011 NASA Aeronautics Research Mission Directorate Associate Administrator Award in Technology and Innovation.

  • Title: System Science of Security and Resilience of Cyber-Physical Systems
  • Abstract: The exponential growth of information and communication technologies has caused a profound shift in the way humans engineer systems, leading to the emergence of closed-loop systems involving strong integration and coordination of physical and cyber components, often referred to as cyber-physical systems (CPS). Complex CPS abound in modern society, and it is not surprising that many of these systems are safety- and mission-critical, which makes them a target for attacks. The talk will present principles and methods for designing and analysing resilient CPS architectures that deliver required service in the face of compromised components. A fundamental challenge is to understand the basic tenets of CPS resilience and how they can be used in developing resilient architectures. The proposed approach integrates redundancy, diversity, and hardening methods for designing both passive resilience methods that are inherently robust against attacks and active resilience methods that allow responding to attacks. In addition, we will describe a modelling and simulation integration platform for experimentation and evaluation of resilient CPS, using smart transportation systems as the application domain. Evaluation of resilience is based on attacker-defender games using simulations of sufficient fidelity.

Prof. Peter Puschner
Institute of Computer Engineering, Technische Universitaet Wien, Vienna, Austria

Bio: Peter Puschner is a professor in computer science at Vienna University of Technology. P. Puschner’s main research interest is on hard real-time systems for safety-critical applications, with a focus on the worst-case execution time (WCET) analysis of real-time programs and software/hardware architectures for time-predictable computing. P. Puschner has been coordinator and researcher in a number of research projects. At the European level, he has been involved in the EC-funded projects PDCS, DeVa, SETTA, NextTTA, and DECOS, was an executive-board member in the EC-funded Network of Excellence on distributed and dependable systems (CaberNet), and is a member of the ARTIST2 and ARTIST Design networks of excellence on embedded systems. Nationally, he has coordinated a number of projects funded by the Austrian Science Fund (FWF) and projects funded within the FIT-IT embedded-systems initiative of the Austrian ministry of Transport, Innovation and Technology. P. Puschner is a member of the IEEE Computer Society, Euromicro, the OCG (Austrian Computer Society), and the Marie-Curie Fellowship Association.

  • Title: Temporal Control in Multi-Component Systems
  • Abstract: The talk addresses the problem of providing timing guarantees in systems that consist of multiple interacting components. Without precaution, resource competition for the communication network and interference caused by incoming communication requests undermine the temporal predictability of such systems. We therefore propose to use controlled, time-triggered communication to restrict the interference between interacting components at the network level. For the realization of components, on the other hand, we provide two strategies to hide, respectively avoid, disturbances caused by incoming communication. The combination of these two strategies allows engineers to build multi-component systems that are fully time-predictable, i.e., they provide time-predictability and temporal control both at the system level and the component level.

Prof. Ilangko Balasingham
Department of Electronic Systems, Norwegian University of Science and Technology (NTNU), Trondheim, Norway.

Bio: Ilangko Balasingham received the M.Sc. and Ph.D. degrees from the Department of Electronic Systems, Norwegian University of Science and Technology (NTNU), Trondheim, Norway in 1993 and 1998, respectively, both in signal processing. He carried out his Master’s thesis at the Department of Electrical and Computer Engineering, University of California Santa Barbara, USA. From 1998 to 2002, he worked as a Research Engineer developing image and video streaming solutions for mobile handheld devices at Fast Search & Transfer ASA, Oslo, a startup that was acquired by Microsoft Inc. and renamed Microsoft Development Center Norway. Since 2002 he has been with the Intervention Centre, Oslo University Hospital, where he is Head of the Section for Medical ICT Research and Head of the Wireless Biomedical Sensor Network Research Group. He has been Professor of Medical Signal Processing and Communications at NTNU since 2006. His research interests include super-robust short-range communications for both in-body and on-body sensors, body area sensor networks, microwave short-range sensing of vital signs, short-range localization and tracking of mobile sensors, and molecular and nano communication networks. He has authored or co-authored more than 90 journal papers, 170 full conference papers, 8 book chapters, 42 abstracts, and 16 articles in the popular press, and holds 6 issued patents and 10 disclosures of inventions. He has supervised 21 Postdocs, 21 PhDs, and 30 Masters. He was the General Chair of the 2012 Body Area Networks (BODYNETS) conference and the 2019 IEEE International Symposium on Medical Information and Communication Technology (ISMICT), and TPC Chair of ACM NANOCOM 2015. He serves as Area Editor of Elsevier Nano Communication Networks and Steering Committee Member of ACM.

  • Title: Internet of smart implants – micro-nano scale devices with artificial intelligence connected to 5G networks
  • Abstract: Wireless body area networks enable ingestible, implantable, insertable and on-body sensors and actuators to be connected to wireless networks in a seamless, reliable, secure manner interfacing with the Internet. The networks enable remote health status monitoring, diagnosis, and treatment delivery. My talk will present the latest results on wireless communication technologies for implants including radio frequency and human body galvanic communication to address next generation leadless pacemakers and robotic capsule endoscopes. I will show how machine learning/deep learning methods are used for improved diagnosis and drug delivery. I will briefly introduce an emerging scientific field called molecular communication technology and will give some examples on how to signal and communicate with human cells aiming to connect them to the Internet. The talk will highlight some of the novel applications and ongoing research projects funded by the EU.

Prof. Raquel Diaz-Sprague
Electrical and Computer Engineering, University of Alabama at Birmingham, Birmingham, Alabama 35205, USA

Bio: Dr. Raquel Diaz-Sprague is an interdisciplinary scholar, biochemist, and ethicist, a member of the Association for Practical & Professional Ethics (APPE) and of the Association for Women in Science (AWIS). She is currently a candidate for the APPE Board of Directors. Her work focuses on ethical and societal issues in science, technology, and medicine. She was previously affiliated with the Ohio State University (OSU) Biochemistry Department, as Director of the Women in Science Day Program, and as Adjunct Instructor in the OSU College of Medicine. She has given presentations and workshops at the Computer Ethics & Philosophical Inquiry (CEPE 2014) and Association for Computational Linguistics (ACL 2017) conferences. She has published dozens of conference proceedings and a chapter in the book “Navigating Academia: A Guide for Women and Minority STEM Faculty” (Elsevier Press, 2015). A Fulbright Fellow, she has received an OSU Alumni Excellence Award, been elected an AWIS Fellow, and been inducted into the Ohio Women’s Hall of Fame. She was a Visiting Scholar in the University of Alabama at Birmingham (UAB) Computer Science Department (2015-2017). Since 2017 she has been a Visiting Scholar in the UAB Electrical and Computer Engineering Department.

  • Title: Tying Ethics & Teamwork Training for Computing Students - Toward Ethics-in-Action Apps
  • Abstract: Most Computer Science and Engineering curricula in the US require ethics training and teamwork skills, and the Accreditation Board for Engineering and Technology mandates that ethics and teamwork be course objectives in at least one course in these majors. However, these subjects are difficult to cover and sometimes neglected. Absent training on specific teamwork skills, unsatisfactory teamwork experiences may lead students to fear and avoid teamwork (Lingard, 2010). At Harvard University, Professors Grosz (CS) and Simmons (Philosophy) have designed a ground-breaking model, “EthiCS”, which pairs CS faculty with Philosophy graduate students to find ethics-rich topics in CS courses and embed discussion of those topics in the curricula of each of 13 CS courses (Grosz, 2019). At the University of Alabama at Birmingham, a Computer Science professor (Sprague) and an ethicist (Diaz-Sprague) are pilot testing a four-class-period minimodule on Ethics and Teamwork embedded in CS Capstone courses and advanced ECE courses. The first two class periods offer a refresher on ethical principles and tips on moral reasoning, based on segments of video lectures by Michael Sandel (“Justice: What’s the Right Thing to Do”) and Frans de Waal (“On Animal Morality”). These are followed by guided discussions in which students participate as teams. Outside of class, students read about teamwork, and each student team selects and practices a teambuilding game. On days 3 and 4 the teams lead the class in playing the game. We report results and observations. Also, since the Capstone course requires that students work in teams on a software engineering design project, and EE485 is a prerequisite for EE498, the design course for ECE students, we are encouraging students to develop some type of moral guidance tool.
The apps may offer tips for de-escalating conflict or random acts of kindness, or be a repository of time-tested proverbs, aphorisms, or moral codes derived from many religions or cultures. Currently, our team is running an Art, Essay, or App Design Challenge, or Ethics-in-Action contest, open to all interested students, regardless of institution. See

Prof. Gregor Rozinaj
Faculty of Electrical Engineering and Information Technology, Institute of Multimedia ICT, Slovak University of Technology, Bratislava, Slovakia, European Union.

Bio: Gregor Rozinaj is a professor at the Faculty of Electrical Engineering and Information Technologies, Slovak University of Technology in Bratislava. Currently, he is a director of the Institute for Multimedia ICT, FEI STU. He has published approximately 150 papers in scientific journals and international conferences. He is the author of 4 patents, 3 of them worldwide. His scientific school consists of 14 successfully finished PhD students and more than 120 MSc students. He has served as a principal investigator on more than 20 research projects, among them Horizon2020/FP7 projects. His research interest is oriented to multimedia processing, optimization techniques, fast algorithms, HCI. Previously, he worked at the University of Stuttgart, Germany and Alcatel Research Centre in Stuttgart, Germany.

  • Title: Multimedia processing for on-line Education
  • Abstract: The human desire to see, hear, and feel what is distant from us, and to transmit our view of the life around us over great distances, is a main inspiration for communications. For more than 100 years, low-quality voice transmission (phone), extended with very low-speed text transmission (fax), was the only possibility for distant information exchange. Within the last couple of years we have seen an extraordinary change in the way we communicate. The dominance of voice communications is over, and multimedia communication (including texting) is a new but already common era in communications. Despite the fact that the technology industry offers high-speed communication channels, the amount of transferred data is so huge that transfer optimization, adaptive multimedia content delivery, recommendation engines, and user experience are very important keywords for fulfilling user requirements for information retrieval. Telereality, or the visualization of distant reality, seems to be a new milestone in communications. Multimedia communications bring new opportunities to many areas of our lives. Education based on modern technologies and approaches offers an excellent area for new breathtaking multimedia applications in on-line education.

Prof. Harold Pardue
Dean, Graduate School School of Computing Associate Vice President for Academic Affairs University of South Alabama, USA.

Bio: Prof. Harold Pardue is the Dean of the Graduate School, School of Computing, and Associate Vice President for Academic Affairs at the University of South Alabama, USA. He received a Bachelor of Arts with a major in Philosophy from the University of South Alabama in 1989 and a Master of Computer and Information Sciences from the University of South Alabama in 1991. His dissertation is titled “A Systems Analysis of the Commercialization of Information Technologies by IT Producing Industries”, and he was awarded the Doctor of Philosophy in 1996 by the College of Business, Florida State University. His research interests include security and risk assessment in healthcare systems, trust in computer-mediated environments, and information systems and healthcare education.

  • Title: Explainable AI and Challenges to Scholarly Review
  • Abstract: A critical component of the review, assessment, and dissemination of scholarly, scientific work in Artificial Intelligence is the transparency of the phenomenon. As algorithms become less demonstrably deterministic and increasingly complex, the scientific community is faced with the challenge of how to “explain” the observed phenomena and findings. Another challenge is establishing the repeatability of the results. AI-based systems increasingly do not lend themselves to classic, directly observable stimulus-response tests. A promising approach to address this lack of transparency and repeatability is to integrate research methods from the social sciences.