Recent research in machine learning (ML) and neurophysiology has focused on the development of highly intelligent algorithms that utilize information processing principles of the human brain. Deep learning, inspired by the architecture of the cerebral cortex, has attracted the attention of many artificial intelligence (AI) scientists. It is the dominant AI approach in specific domains (e.g., image and voice classification, object detection), despite its requirements for high computational power and large volumes of data.

This is the editorial of the “Brain Inspired Computing and Machine Learning” Special Issue (SI) of the Springer journal Neural Computing and Applications. The response of the scientific community has been significant, as many original research papers were submitted for consideration. In total, 11 of the 20 submissions were accepted after peer review. All of them contain significant elements of novelty and introduce interesting modeling approaches or algorithms inspired by the biological processes of the human brain.

The first paper, entitled “Operational Neural Networks,” is authored by Serkan Kiranyaz, Department of Electrical Engineering, Qatar University, Qatar; Turker Ince, Department of Electrical & Electronics Engineering, Izmir University of Economics, Turkey; Alexandros Iosifidis, Department of Engineering, Aarhus University, Denmark; and Moncef Gabbouj, Department of Computing Sciences, Tampere University, Finland. The authors introduce a new heterogeneous machine learning model, the operational neural network (NN). It can encapsulate neurons with any set of operators, in order to boost diversity and to learn highly complex and multimodal functions or spaces. This can result in minimal network complexity, requiring only a small volume of training data. A comparative analysis with convolutional NN models is performed to demonstrate the efficiency of the proposed architecture.
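
To make the notion of an operational neuron more concrete, the following minimal sketch (our own illustration, not the authors' implementation; the operator choices and layer sizes are arbitrary assumptions) shows a layer in which the usual multiply-and-sum operation is replaced by configurable nodal and pooling operators.

```python
# Minimal sketch of an "operational" layer: the classical multiply-and-sum neuron is
# generalized to a configurable nodal operator (applied to each input-weight pair)
# and a pooling operator (aggregating the results).
import torch
import torch.nn as nn

class OperationalLayer(nn.Module):
    def __init__(self, in_features, out_features, nodal=torch.mul, pool=torch.sum):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.nodal = nodal   # e.g. torch.mul, or lambda x, w: torch.sin(x * w)
        self.pool = pool     # e.g. torch.sum, torch.median, torch.amax

    def forward(self, x):                                         # x: (batch, in_features)
        z = self.nodal(x.unsqueeze(1), self.weight.unsqueeze(0))  # (batch, out, in)
        y = self.pool(z, dim=-1)
        if isinstance(y, tuple):                                  # torch.median returns (values, indices)
            y = y[0]
        return torch.tanh(y + self.bias)

# A classical perceptron layer is recovered with nodal=torch.mul and pool=torch.sum;
# swapping in other operator pairs yields heterogeneous neurons.
layer = OperationalLayer(8, 4, nodal=lambda x, w: torch.sin(x * w), pool=torch.sum)
out = layer(torch.randn(2, 8))
```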

The second paper is authored by Ioannis Livieris, Department of Mathematics, University of Patras, Greece. Its title is “An Advanced Active set L-BFGS Algorithm for Training Weight Constrained Neural Networks.” This research proposes a novel Advanced Active set Limited Memory Broyden–Fletcher–Goldfarb–Shanno (AA-L-BFGS) algorithm for efficiently training weight-constrained neural networks. It approximates the curvature of the error function with high-order accuracy, which is achieved by utilizing the theoretically advanced secant condition proposed by Livieris and Pintelas [1]. Moreover, the global convergence of the proposed algorithm is established, provided that the line search satisfies the modified Armijo condition [2].
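
As a rough illustration of weight-constrained training in general (not of the AA-L-BFGS algorithm itself), the sketch below fits a small network under box constraints on its weights using SciPy's off-the-shelf bound-constrained L-BFGS-B solver; the synthetic data, network size and bounds are arbitrary stand-ins.

```python
# Illustrative sketch only: weight-constrained training with a generic bound-constrained
# quasi-Newton solver (SciPy's L-BFGS-B), used as a simple stand-in for AA-L-BFGS.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))        # synthetic regression target

n_hidden = 10
shapes = [(3, n_hidden), (n_hidden,), (n_hidden, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    return float(np.mean((pred.ravel() - y) ** 2))

theta0 = rng.normal(scale=0.1, size=sum(sizes))
bounds = [(-2.0, 2.0)] * theta0.size              # the weight constraints
res = minimize(loss, theta0, method="L-BFGS-B", bounds=bounds)
print(res.fun, np.abs(res.x).max())               # final MSE; all weights stay within [-2, 2]
```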

Antonios Karatzoglou, Robert Bosch GmbH, Chassis Systems Control, Advance Engineering, Germany, and Nikolai Schnell and Michael Beigl, from the Karlsruhe Institute of Technology, Germany, are the authors of the third paper, entitled “Applying Depthwise Separable and Multi-Channel Convolutional Neural Networks of Varied Kernel Size on Semantic Trajectories.” The authors explore the performance of convolutional neural networks (CNNs) with respect to their capability of modeling semantic trajectories and predicting future locations in a location prediction scenario. The proposed approach comprises three major parts. Initially, a standard single-channel approach is evaluated and compared with a feedforward NN, a recurrent NN and a long short-term memory (LSTM) network. Then, the paper describes a multi-channel CNN that relies on a set of kernels of varied size. The third part evaluates the use of depthwise separable convolutions with respect to training performance and test accuracy. The introduced CNN-based models are shown to outperform the aforementioned systems in terms of accuracy, and thus to be able to model semantic trajectories and predict future human movement patterns.
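
The following sketch (our own simplified illustration; the vocabulary size, channel counts and kernel sizes are assumptions) indicates what a multi-channel CNN with kernels of varied size, plus a depthwise separable branch, might look like when applied to embedded semantic trajectory tokens.

```python
# Minimal sketch: a multi-channel 1D CNN over embedded semantic trajectory tokens,
# with parallel branches of varied kernel size and one depthwise separable branch.
import torch
import torch.nn as nn

class MultiKernelTrajectoryCNN(nn.Module):
    def __init__(self, vocab_size=500, embed_dim=32, n_classes=20, kernel_sizes=(2, 3, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One convolutional branch per kernel size.
        self.branches = nn.ModuleList(
            [nn.Conv1d(embed_dim, 64, kernel_size=k, padding=k // 2) for k in kernel_sizes]
        )
        # Depthwise separable 1D convolution: depthwise conv followed by a pointwise conv.
        self.depthwise = nn.Sequential(
            nn.Conv1d(embed_dim, embed_dim, kernel_size=3, padding=1, groups=embed_dim),
            nn.Conv1d(embed_dim, 64, kernel_size=1),
        )
        self.fc = nn.Linear(64 * (len(kernel_sizes) + 1), n_classes)

    def forward(self, tokens):                       # tokens: (batch, seq_len) of location ids
        x = self.embed(tokens).transpose(1, 2)       # (batch, embed_dim, seq_len)
        feats = [torch.relu(b(x)).amax(dim=-1) for b in self.branches]
        feats.append(torch.relu(self.depthwise(x)).amax(dim=-1))
        return self.fc(torch.cat(feats, dim=1))      # logits over candidate next locations

logits = MultiKernelTrajectoryCNN()(torch.randint(0, 500, (4, 12)))
```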

An interesting research effort, entitled “Identify Data Streams Anomalies by Evolving Spiking Restricted Boltzmann Machines,” is authored by Lining Xing, Central South University of Forestry and Technology, Changsha, China; Konstantinos Demertzis, Democritus University of Thrace, Greece; and Jinghui Yang, College of Engineering, Shanghai Polytechnic University, Shanghai, China. This research introduces a novel real-time evolving spiking restricted Boltzmann machine (e-SREBOM) approach for efficient anomaly detection in data streams. Testing has demonstrated enhanced classification accuracy and minimal computational resource requirements.

Myrianthi Hadjicharalambous, Department of Mechanical Engineering, University of Cyprus, and Marios Polycarpou and Christos G. Panayiotou, both from the Department of Electrical and Computer Engineering, University of Cyprus, are the authors of the research paper “Neural Network-based Construction of Online Prediction Intervals.” Assessment of prediction uncertainty in artificial neural network applications is typically performed using prediction intervals (PRI); nevertheless, little work has been done on their development for online learning. This research proposes a hybrid approach that employs NNs to directly estimate PRI for both batch and online approximation scenarios. It provides high-quality PRI that combine high coverage probability for future observations with small, and thus informative, interval widths. A comparison has been performed with three well-established methods. The presented approach is also extended to adaptive approximation, demonstrating a strong capacity for reliably representing prediction uncertainty in real-world regression applications.
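
One generic way for a network to emit such intervals directly, shown in the hedged sketch below, is to train two output heads with quantile (pinball) losses; this is a common stand-in rather than the interval-construction method of the paper, and the data and layer sizes are arbitrary.

```python
# Hedged sketch: a network with two output heads trained with quantile (pinball) losses,
# yielding an approximate 90% prediction interval on synthetic data.
import torch
import torch.nn as nn

def pinball(pred, target, q):
    diff = target - pred
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))  # outputs: (lower, upper)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(x) + 0.3 * torch.randn_like(x)          # noisy synthetic data

for _ in range(500):
    out = net(x)
    loss = pinball(out[:, :1], y, 0.05) + pinball(out[:, 1:], y, 0.95)
    opt.zero_grad(); loss.backward(); opt.step()

lower, upper = net(x).detach().unbind(dim=1)
coverage = ((y.squeeze() >= lower) & (y.squeeze() <= upper)).float().mean()
print(f"empirical coverage ~ {coverage:.2f}, mean width ~ {(upper - lower).mean():.2f}")
```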

The sixth paper, “Real-Time Diameter of the Fetal Aorta from Ultrasound,” is authored by Nicolo Savioli and Enrico Grisan, Department of Biomedical Engineering, King’s College London, UK; Silvia Visentin and Erich Cosmi, Department of Woman and Child Health, University Hospital of Padova, Italy; Giovanni Montana, WMG International Digital Laboratory, University of Warwick, Coventry, UK; and Pablo Lamata, Department of Biomedical Engineering, King’s College London, UK. This research aims to automate the task of measuring the vascular diameter of the fetal abdominal aorta from ultrasound images. It introduces an NN architecture consisting of three blocks: a convolutional NN for the extraction of imaging features, a convolutional gated recurrent unit for exploiting the temporal redundancy of the signal, and a regularized loss function, named CyclicLoss, that imposes prior knowledge about the periodicity of the observed signal. The solution is evaluated on a cohort of 25 ultrasound sequences acquired during third-trimester pregnancy check-ups and on 1000 synthetic sequences.
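
The sketch below illustrates the general flavour of such a regularized loss: a standard regression term plus a penalty encouraging the predicted diameter signal to repeat after one (known or estimated) cardiac period. The exact formulation of CyclicLoss in the paper may differ.

```python
# Hedged sketch of a periodicity-regularized regression loss (not the paper's exact CyclicLoss).
import torch

def cyclic_regression_loss(pred, target, period, lam=0.1):
    """pred, target: (batch, time) diameter sequences; period: cardiac period in frames."""
    mse = torch.mean((pred - target) ** 2)
    # Penalize disagreement between the prediction and itself shifted by one period.
    cyclic = torch.mean((pred[:, period:] - pred[:, :-period]) ** 2)
    return mse + lam * cyclic

pred = torch.randn(2, 120, requires_grad=True)
target = torch.randn(2, 120)
loss = cyclic_regression_loss(pred, target, period=30)
loss.backward()
```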

Gerrit A. Ecke, Sebastian A. Bruijns, Johannes Holscher, Fabian A. Mikulasch, Thede Witschel, Aristides B. Arrenberg and Hanspeter A. Mallot, Department of Biology, University of Tübingen, Germany, are the authors of the seventh paper, entitled “Sparse Coding Predicts Optic Flow Specificities of Zebrafish Pretectal Neurons.” Zebrafish pretectal neurons exhibit specificities for large-field optic flow patterns associated with rotatory or translatory body motion. This research investigates the hypothesis that these specificities reflect the input statistics of natural optic flow. Realistic motion sequences were generated using computer graphics, simulating self-motion in an underwater scene. Local retinal motion was estimated with a motion detector and encoded in four populations of directionally tuned retinal ganglion cells, represented as two signed input variables. This activity was then used as input to one of three learning networks: a sparse coding NN (competitive learning), PCA whitening with subsequent sparse coding, and a backpropagation NN (supervised learning). All simulations developed optic flow specificities comparable to those reported in the neurophysiology literature. However, the relative frequencies of the various neuronal responses were best modeled by the sparse coding approach without whitening.
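
As a purely illustrative stand-in for the learning step (using scikit-learn rather than the authors' networks, and random data in place of the encoded optic flow inputs), the pipeline of PCA whitening followed by sparse coding can be sketched as follows.

```python
# Illustrative sketch only: PCA whitening of motion-input vectors followed by sparse
# coding with a learned dictionary; all sizes and the data itself are assumptions.
import numpy as np
from sklearn.decomposition import PCA, DictionaryLearning

rng = np.random.default_rng(0)
# Stand-in for the encoded optic-flow inputs: each row is one flow field sample,
# flattened over retinal positions and the two signed direction channels.
X = rng.normal(size=(500, 2 * 8 * 8))

X_white = PCA(whiten=True, n_components=64).fit_transform(X)   # PCA whitening
dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=100,
                          transform_algorithm="lasso_lars", random_state=0)
codes = dico.fit_transform(X_white)                            # sparse responses
print(codes.shape, np.mean(codes != 0))   # dictionary atoms play the role of model units
```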

An interesting piece of research is described in the eighth paper, entitled “Neural Networks with Block Diagonal Inner Product Layers: A Look at Neural Network Architecture through the Lens of Random Matrices.” The authors are Amy Lynn Nesky and Quentin F. Stout, University of Michigan, Ann Arbor, USA. Two difficulties continue to burden deep learning researchers and users: the fact that neural networks are cumbersome tools and the fact that the activity of the fully connected (FC) layers remains mysterious. This paper contributes to both issues by considering a modified version of the FC layer, known as the block diagonal inner product layer. These modified layers have weight matrices that are block diagonal, turning a single FC layer into a set of densely connected neuron groups; they can be obtained either by initializing a purely block diagonal weight matrix or by iteratively pruning off-diagonal block entries. The idea is a natural extension of grouped, or depthwise separable, convolutional layers. The method condenses network storage and speeds up the run time without a significant adverse effect on testing accuracy, addressing the first problem.
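
A minimal sketch of one possible realization of such a layer (our illustration, not the authors' code) is given below: the input is split into groups, each processed by its own small dense block, which is equivalent to a block diagonal weight matrix.

```python
# Minimal sketch of a block diagonal inner product layer: a fully connected layer whose
# weight matrix is constrained to be block diagonal, splitting the neurons into groups.
import torch
import torch.nn as nn

class BlockDiagonalLinear(nn.Module):
    def __init__(self, in_features, out_features, num_blocks):
        super().__init__()
        assert in_features % num_blocks == 0 and out_features % num_blocks == 0
        self.num_blocks = num_blocks
        self.blocks = nn.ModuleList(
            [nn.Linear(in_features // num_blocks, out_features // num_blocks)
             for _ in range(num_blocks)]
        )

    def forward(self, x):                                  # x: (batch, in_features)
        chunks = x.chunk(self.num_blocks, dim=1)           # one chunk per neuron group
        return torch.cat([blk(c) for blk, c in zip(self.blocks, chunks)], dim=1)

# Same input/output sizes as nn.Linear(512, 512), but with num_blocks=8 it stores and
# multiplies 8 blocks of 64x64 weights instead of one dense 512x512 matrix.
layer = BlockDiagonalLinear(512, 512, num_blocks=8)
out = layer(torch.randn(4, 512))
```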

The ninth paper is entitled “A Two-Stage Approach for Automatic Liver Segmentation with Faster R-CNN and DeepLab” and is authored by Wei Tang, Dongsheng Zou, Su Yang, Jing Shi and Jingpei Dan, from Chongqing University, China, and by Guowu Song, from Sichuan Changhong Electronics Co. Ltd, Sichuan, China. The authors propose a novel two-stage automatic liver segmentation method that combines Faster R-CNN and DeepLab, eliminating the need for manual feature extraction and user interaction. The proposed approach can accurately segment livers with small volume and complex contours. Compared with other concurrent works, the introduced model obtains good results for three reasons. First, deep learning is employed in both the liver detection and the segmentation stages, minimizing the impact of human intervention on segmentation. Second, the segmentation stage uses residual learning, and a skip (hop) structure is applied to fuse the multi-level information of the network; the useful information of the middle layers is reused to extract richer and more significant context information, which balances the two contradictory goals of localization and classification in liver semantic segmentation. Third, the processed detection result image is fed into the segmentation model, which narrows the range of the subsequent semantic segmentation.
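
The two-stage plumbing can be sketched as follows; note that the pretrained COCO/VOC weights used here are only placeholders, since in practice both models would be fine-tuned on annotated liver CT data, and the thresholds and crop handling are our assumptions.

```python
# Sketch of generic detect-then-segment plumbing (not the paper's trained models).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.segmentation import deeplabv3_resnet50

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
segmenter = deeplabv3_resnet50(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)                            # stand-in for a CT slice

with torch.no_grad():
    # Stage 1: detect the organ and keep the highest-scoring box.
    det = detector([image])[0]
    boxes, scores = det["boxes"], det["scores"]
    if len(boxes) == 0:
        x1, y1, x2, y2 = 0, 0, image.shape[2], image.shape[1]   # fall back to full image
    else:
        x1, y1, x2, y2 = boxes[scores.argmax()].round().int().tolist()
    x2, y2 = max(x2, x1 + 1), max(y2, y1 + 1)              # avoid degenerate crops

    # Stage 2: run semantic segmentation only inside the detected region,
    # which narrows the range of the subsequent segmentation.
    crop = torch.nn.functional.interpolate(
        image[:, y1:y2, x1:x2].unsqueeze(0), size=(224, 224), mode="bilinear"
    )
    mask = segmenter(crop)["out"].argmax(dim=1)            # per-pixel class labels
```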

Savvas Karatsiolis and Christos N. Schizas, Department of Computer Science, University of Cyprus, and Nicolai Petkov, Intelligent Systems Group, Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, the Netherlands, are the authors of the tenth paper, “Modular Domain-to-Domain Translation Network.” The authors present a novel approach for domain translation that does not incorporate a generative adversarial network (GAN) in its architecture. The approach employs a hierarchical architecture that comprises individually trained modules. The architecture is then trained as a whole (fine-tuned) and achieves results that are at least comparable to those of state-of-the-art models that use one or more GANs. By not using GANs, the proposed architecture avoids deficiencies related to their training, such as high instability and mode collapse.

Finally, the 11th paper, “A deep learning classifier for sentence classification in biomedical and computer science abstracts,” is authored by Sergio Goncalves and Paulo Cortez, Algoritmi Centre, Department of Information Systems, University of Minho, Portugal, and Sergio Moro, Instituto Universitario de Lisboa (ISCTE-IUL), ISTAR-IUL, Lisbon, Portugal. This paper presents a novel deep learning architecture for the classification of scientific abstract sentences (background, objectives, methods, results, conclusions), a task that offers significant assistance in scientific database querying, literature reviews and the writing of new abstracts. The proposed Word-BiGRU architecture combines word embeddings, a convolutional layer and a bidirectional gated recurrent unit (GRU). Using a large sentence corpus drawn from around 20,000 abstracts in the biomedical domain, the authors obtained high-quality classification performance, with weight-averaged precision, recall and F1-score values of around 91%. These results compare favorably against a state-of-the-art bidirectional long short-term memory model.
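
A hedged sketch of a Word-BiGRU-style sentence classifier is given below; the layer sizes, vocabulary and kernel width are our assumptions, not the values used in the paper.

```python
# Hedged sketch: word embeddings, a 1D convolution, a bidirectional GRU, and a
# classification head over the five abstract-sentence roles.
import torch
import torch.nn as nn

class WordBiGRUClassifier(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, conv_channels=128,
                 hidden=64, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        self.bigru = nn.GRU(conv_channels, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)  # background/objectives/methods/results/conclusions

    def forward(self, tokens):                         # tokens: (batch, seq_len) word ids
        x = self.embed(tokens).transpose(1, 2)         # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)   # (batch, seq_len, conv_channels)
        _, h = self.bigru(x)                           # h: (2, batch, hidden)
        return self.out(torch.cat([h[0], h[1]], dim=1))

logits = WordBiGRUClassifier()(torch.randint(1, 30000, (8, 40)))
```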

We wish to express our appreciation and deep gratitude to the Editor-in-Chief of the Springer journal Neural Computing and Applications, Professor John Macintyre, for his full support and for offering us the privilege of editing a special issue in this high-quality journal. We have tried to make a modest and timely contribution to the existing literature. We hope that the novel approaches presented in this publication will be appreciated by the international scientific community, and we are certain that they will inspire further research on neural computing and its applications in many diverse domains.