Tutorials


In this edition of CBIC we have received many interesting tutorial proposals. We have selected the following for presentation:

Cristina Nader Vasconcelos (IC/UFF)
Convolutional Neural Networks: from Neocognitron to ResNet

Summary:
The tutorial will cover the fundamentals of deep convolutional neural networks and their evolution from Fukushima’s Neocognitron to the modern ResNet. Sample code and APIs will be presented.
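
Since the tutorial promises sample code, the following is a small, hedged illustration of the residual block that gives ResNet its name, written here in PyTorch; the framework choice, channel count and toy input are our own illustrative assumptions, not material from the tutorial:

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Basic residual block: two 3x3 convolutions plus an identity shortcut."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)   # the shortcut connection that defines ResNet

    x = torch.randn(1, 64, 32, 32)      # toy input: one 64-channel 32x32 feature map
    print(ResidualBlock(64)(x).shape)   # torch.Size([1, 64, 32, 32])

The shortcut connection (out + x) is what lets very deep networks train without the degradation that plain stacked convolutions suffer from, which is the core step from earlier CNNs to ResNet.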

Mini-bio(s):
Cristina Nader Vasconcelos is an Assistant Professor at the Fluminense Federal University. She received her Master’s and PhD in Informatics from PUC-Rio and her bachelor’s degree in Computer Science from the Federal University of Rio de Janeiro. Her expertise spans several areas of visual computing, including Computer Graphics, Image Processing, Computer Vision and general-purpose computing on GPUs. Since 2010 her research has focused on Pattern Recognition and, more recently, on Deep Learning in particular.

Ana Cristina Bicharra Garcia and Mark Klein (UNIRIO - MIT)
Crowd Computing: From Human Computation to Collective Intelligence

Summary:
Crowd computing - systems where crowds (of people) and clouds (of computers) work together to make better decisions than either could make alone - has exploded onto the scene in the last decade or so, with startling success stories such as Wikipedia, Linux, Mechanical Turk, Google search, Galaxyzoo, Sermo, Slashdot, Seeclickfix, fold.it and hundreds of others, and with profound and still emerging impacts on everything from science and education to entertainment, business, and government. The phenomenon is many-sided, ranging from human computation (where humans act as subroutines, performing simple micro-tasks to support some larger computation) to collective intelligence (where computers support communities in making decisions about our species’ most complex and pressing problems). This tutorial will help participants:

  • understand what makes crowd computing so potentially powerful;
  • become familiar with the key types of crowd computing technology, including their strengths and weaknesses;
  • identify promising directions for future research in this area.

Mini-bio(s):
Ana Cristina Bicharra Garcia is a Full Professor at the Departamento de Informática Aplicada, UNIRIO, and was a professor at UFF from 1994 to 2017. She did her master’s and PhD studies at Stanford University and was a visiting scholar at Stanford in 2002 and at MIT in 2013. She has advised 8 PhD and 30 Master’s students. She founded ADDlabs, a research lab in artificial intelligence, and coordinated it until 2017. Mark Klein is a Principal Research Scientist at the MIT Center for Collective Intelligence, as well as a Visiting Researcher at the University of Zurich. His research focuses on understanding how computer technology can help groups, especially large ones, make better decisions about complex problems. He has made contributions in the areas of computer-supported conflict management for collaborative design, design rationale capture, business process re-design, exception handling in workflow and multi-agent systems, service discovery, negotiation algorithms, ‘emergent’ dysfunctions in distributed systems and, more recently, ‘collective intelligence’ systems to help people collaboratively solve complex problems such as global warming.

Cidiney Silva (Eletrobras Furnas PPGEE/UFMG)
A Hybrid Method for Forecasting in Smart Grids Scope

Summary:
Smart Grids emerge as the next technological breakthrough for power generation, transmission and distribution systems. In a Smart Grid, the boundaries between generation and consumption/distribution are blurred, so load forecasting and generation forecasting are significantly different from the corresponding processes in established/legacy power systems. This tutorial addresses forecasting methods in the scope of Smart Grids and how these methods support their intelligent behavior. It is imperative to have a consistent framework for short-term prediction that meets the operating characteristics of Smart Grids. Traditional ARMA-like time series prediction models, such as SARIMA and SARFIMA, have been applied in Electric Power Systems. To improve the accuracy of these methods, hybrid methods are developed that integrate fuzzy logic into the SARIMA and SARFIMA models. The proposed models are based on the technique of Fuzzy Time Series (FTS). They meet the need for methods that rely less on strong stationarity assumptions and are parsimonious in their parameters, even when dealing with long-memory stochastic processes. In this tutorial, the proposed algorithmic solutions will be shown and analyzed, mainly in the form of hybrid SARFIMA and Fuzzy Time Series methods (SARFIMAFTS). This framework has been applied successfully to many important problem instances, from large national power load curves to minigrids. After the presentation of the framework and its results, a round table will be held with leading researchers in this area.
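
To make the FTS component of the discussion concrete, here is a minimal first-order Fuzzy Time Series forecaster in the style of Chen’s classic method, written in Python; it is a hedged sketch only, and the interval count, the toy load curve and the function names are our own illustrative assumptions, not the tutorial’s SARFIMAFTS implementation:

    import numpy as np

    def fit_fts(series, n_sets=7):
        """Partition the universe of discourse and learn fuzzy logical relationship groups."""
        lo, hi = series.min(), series.max()
        edges = np.linspace(lo, hi, n_sets + 1)            # interval boundaries
        mids = (edges[:-1] + edges[1:]) / 2.0              # midpoint of each fuzzy set
        # Fuzzify: assign each observation to the interval (fuzzy set) containing it.
        labels = np.clip(np.digitize(series, edges[1:-1]), 0, n_sets - 1)
        # Fuzzy logical relationship groups A_i -> {A_j}: which set tends to follow which.
        flrg = {i: set() for i in range(n_sets)}
        for a, b in zip(labels[:-1], labels[1:]):
            flrg[a].add(b)
        return edges, mids, flrg

    def forecast_next(value, edges, mids, flrg):
        """One-step-ahead forecast: average midpoint of the consequents of the current set."""
        i = int(np.clip(np.digitize(value, edges[1:-1]), 0, len(mids) - 1))
        consequents = flrg.get(i) or {i}                   # fall back to the set itself
        return float(np.mean([mids[j] for j in consequents]))

    if __name__ == "__main__":
        # Toy hourly load curve (MW), purely illustrative.
        rng = np.random.default_rng(0)
        t = np.arange(300)
        load = 1000 + 150 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 20, t.size)
        edges, mids, flrg = fit_fts(load[:-1], n_sets=9)
        print("forecast:", forecast_next(load[-2], edges, mids, flrg), "actual:", load[-1])

A hybrid SARFIMAFTS scheme would combine fuzzy-set machinery of this kind with a SARFIMA backbone for the long-memory seasonal structure; the exact combination is the subject of the tutorial.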

Mini-bio(s):
Electrical Engineer working on projects to expand and improve the National Interconnected System, coordinating projects in hydroelectric plants (1.4 GW+) and in 500 kV, 345 kV and 138 kV substations. Product Engineer for 5+ years in the durable goods industry. Lean Six Sigma Green Belt certified by General Electric. Graduated in Electrical Engineering from the Federal University of Minas Gerais, with a certification in Process Control, and obtained from the same institution the titles of Master of Engineering (2011) and Doctor of Engineering (2016). Like Carlos Drummond de Andrade, my path has led from Itabira to the Marvelous City!

Jorge Guerra Pires (BISMA/UNIVAQ)
A Crash Course in Biomathematics

Summary:
The tutorial is organized in the following modules:

  • Module 1: Getting to know biomathematics - In this module, aimed at people not familiar with biomathematics, we shall briefly present several paradigms and ideas. Starting point: https://www.youtube.com/watch?v=O4J7eAJX1B0.
  • Module 2: Discussions and examples with Matlab - In this module and the next one, we shall present examples with the aim of prompting discussion. Starting point: https://www.youtube.com/watch?v=Mk7f2hUblWE
  • Module 3: Discussions and examples with Matlab - continuation of Module 2.
  • Module 4: Prospects for computational intelligence and biomathematics - In this module we close the tutorial with a selection of problems and issues at the intersection of biomathematics and computational intelligence. Starting points: Pires (2012, 2014, 2017).

Interested participants are kindly requested to fill out this form.

Mini-bio(s):
I hold a bachelor’s degree in engineering from the Federal University of Ouro Preto, a Master of Science in mathematical engineering/technical physics obtained in a double-diploma scheme (Erasmus programme) from the University of L’Aquila/Gdansk University of Technology, and a PhD in Information Engineering (ICT) from the University of L’Aquila. I have been working on computational intelligence since my bachelor’s studies, started a master’s in CI at UFRJ, and began working on biomathematics at the University of L’Aquila. Since then, I have attended several events in biomathematics, mainly systems biology, and in computational intelligence (my main interest is artificial neural networks). My main interest is teaching and working with biologists and medical doctors, as I did during my PhD, and I intend to keep doing so.

Pedro Mário Cruz e Silva (Nvidia Corporation)
New NVIDIA Platform for High-Performance Computing and Artificial Intelligence

Summary:
Deep Learning (DL) is the Machine Learning (ML) technique enabling breakthroughs in several industrial, business, and scientific workflows. Modern AI is the 4th industrial revolution. NVIDIA’s new Deep Learning platform provides the computational power demanded by the recent advances in AI. The recently announced Volta GPU architecture was specially designed for the High-Performance Computing workloads necessary to train Deep Neural Networks on huge amounts of training data. It is the first GPU architecture to include Tensor Cores (TC), processing units designed for high-speed tensor operations. The latest version of the CUDA language (version 9) and the NVIDIA SDKs have been improved to include specialized, highly optimized algorithms that extract the GPUs’ full potential in DNN training and inference tasks. A large variety of training data can be used efficiently for training, including text, audio, images, and video. This new computing model is delivering outstanding results in Computer Vision, Natural Language Processing, Language Translation, Speech Recognition, Recommendation Systems, Logistics, Autonomous Cars, and Robotics.
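
As a concrete illustration of how training code typically engages the Tensor Cores, below is a minimal mixed-precision training loop in PyTorch; the framework, model and hyperparameters are our own illustrative assumptions (not part of the NVIDIA SDKs described above), and the snippet assumes a CUDA-capable GPU:

    import torch
    import torch.nn as nn

    # Tiny model and synthetic data, purely for illustration.
    model = nn.Sequential(nn.Linear(1024, 2048), nn.ReLU(), nn.Linear(2048, 10)).cuda()
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    scaler = torch.cuda.amp.GradScaler()
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(256, 1024, device="cuda")
    y = torch.randint(0, 10, (256,), device="cuda")

    for step in range(10):
        opt.zero_grad()
        with torch.cuda.amp.autocast():   # run matmuls in FP16, eligible for Tensor Cores
            loss = loss_fn(model(x), y)
        scaler.scale(loss).backward()     # scale the loss to avoid FP16 gradient underflow
        scaler.step(opt)
        scaler.update()

Running the matrix multiplications in half precision inside the autocast region is what makes them eligible for Tensor Core execution on Volta-class and later GPUs, while the gradient scaler guards the backward pass against FP16 underflow.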

Mini-bio(s):
Solution Architect.

Heitor Silvério Lopes (UTFPR/Câmpus Curitiba)
Genetic Programming: Fundamentals and Applications

Summary:
Genetic Programming (GP) is an evolutionary computation method widely used for interesting real-world problems. Basically, GP evolves a population of programs (usually represented as complex trees), where each element of the population represents a candidate solution to an optimization problem. There are many classes of problems where GP can be applied successfully, such as data mining, pattern recognition, games and learning strategies. This tutorial will present the fundamentals of GP in an accessible way, so that the audience can include both undergraduate and graduate students. Throughout the tutorial, several real applications will be presented to illustrate the applicability of GP.
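
To make the evolutionary loop above concrete, here is a toy tree-based GP for symbolic regression written in Python; it is a hedged sketch only, and the target function, operator set, selection scheme and parameters are our own illustrative choices rather than material from the tutorial:

    import operator, random

    OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
    TERMINALS = ['x'] + [float(c) for c in range(-2, 3)]

    def random_tree(depth=3):
        # Grow a random expression tree: a nested tuple (op, left, right) or a terminal.
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if isinstance(tree, tuple):
            op, left, right = tree
            return OPS[op](evaluate(left, x), evaluate(right, x))
        return x if tree == 'x' else tree

    def fitness(tree):
        # Mean absolute error against the target x**2 + x (lower is better).
        points = [i / 2.0 for i in range(-10, 11)]
        return sum(abs(evaluate(tree, x) - (x * x + x)) for x in points) / len(points)

    def random_subtree(tree):
        while isinstance(tree, tuple) and random.random() < 0.7:
            tree = random.choice(tree[1:])
        return tree

    def crossover(a, b):
        # Replace one randomly chosen subtree of `a` with a random subtree of `b`.
        if not isinstance(a, tuple) or random.random() < 0.3:
            return random_subtree(b)
        op, left, right = a
        if random.random() < 0.5:
            return (op, crossover(left, b), right)
        return (op, left, crossover(right, b))

    def evolve(pop_size=200, generations=30):
        population = [random_tree() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness)
            parents = population[:pop_size // 4]          # simple truncation selection
            offspring = [crossover(random.choice(parents), random.choice(parents))
                         for _ in range(pop_size - len(parents))]
            population = parents + offspring
        return min(population, key=fitness)

    best = evolve()
    print('best program:', best, 'error:', round(fitness(best), 4))

Full GP systems add tournament selection, subtree mutation, depth limits against bloat, and richer function and terminal sets; the sketch keeps only the core loop of evaluating, selecting and recombining program trees.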

Mini-bio(s):
Full Professor at the Department of Electronics of UTFPR. Bachelor's degree in Electronic Engineering, Master's in Biomedical Engineering, PhD in Electrical Engineering/Information Systems, and postdoctorate at the University of Tennessee.