This book aims to solve the discrete implementation problems of continuous-time neural network models while improving the performance of neural networks by using various Zhang Time Discretization (ZTD) formulas. The authors summarize and present the systematic derivations and complete research of ZTD formulas, from special 3S-ZTD formulas to general NS-ZTD formulas. These derivations culminate in the authors' proposed discrete-time Zhang neural network (DTZNN) algorithms, which are more efficient, accurate, and elegant. This book will open the door to scientific and engineering applications of ZTD formulas and neural networks, and will be a major inspiration for studies in neural network modeling, numerical algorithm design, prediction, and robot manipulator control. The book will benefit engineers, senior undergraduates, graduate students, and researchers in the fields of neural networks, computer mathematics, computer science, artificial intelligence, numerical algorithms, optimization, robotics, and simulation modeling.
This book introduces readers to the fundamentals of and recent advances in federated learning, focusing on reducing communication costs, improving computational efficiency, and enhancing security. Federated learning is a distributed machine learning paradigm that enables model training on a large body of decentralized data. Its goal is to make full use of data across organizations or devices while meeting regulatory, privacy, and security requirements. The book starts with a self-contained introduction to artificial neural networks, deep learning models, supervised learning algorithms, evolutionary algorithms, and evolutionary learning. Concise information is then presented on multi-party secure computation, differential privacy, and homomorphic encryption, followed by a detailed description of federated learning. In turn, the book addresses the latest advances in federated learning research, especially from the perspectives of communication efficiency, evolutionary learning, and privacy preservation. The book is particularly well suited for graduate students, academic researchers, and industrial practitioners in the fields of machine learning and artificial intelligence. It can also be used as a self-learning resource for readers with a science or engineering background, or as a reference text for graduate courses.
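To illustrate the federated learning paradigm described above, here is a minimal sketch of federated averaging (the widely used FedAvg scheme), assuming a toy linear-regression model and synthetic per-client data; the model, data, learning rate, and round counts are illustrative assumptions, not content from the book. Each client trains locally on its private data, and only model parameters (never raw data) are sent to the server, which combines them by a weighted average:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on a linear-regression loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data):
    """One communication round: clients train locally, server averages weights."""
    client_weights = [local_update(global_w, X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    # Average client models, weighted by each client's dataset size
    return np.average(client_weights, axis=0, weights=sizes)

# Synthetic setup: three clients, each holding private data from the same
# underlying model (hypothetical example data).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.01 * rng.normal(size=20)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):  # communication rounds
    w = fedavg_round(w, clients)
```

After the rounds complete, the global weights approach the shared underlying model even though the server never sees any client's raw data, which is the core idea the book's communication-efficiency and privacy chapters build on.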