The fusion of information from sensors with different physical characteristics, such as sight, touch, and sound, enhances our understanding of the surroundings and provides the basis for planning, decision-making, and control of autonomous and intelligent machines. The minimal representation approach to multisensor fusion uses an information measure as a universal yardstick for fusion. Given models of sensor uncertainty, the representation size guides the integration of widely varying types of data and maximizes the information contributed to a consistent interpretation. In this book, the general theory of minimal representation multisensor fusion is developed and applied in a series of experimental studies of sensor-based robot manipulation. A novel application of differential evolutionary computation is introduced to achieve practical and effective solutions to this difficult computational problem.
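To make the optimization component concrete, the following is a minimal sketch of the classic DE/rand/1/bin differential evolution scheme minimizing a stand-in objective. It is illustrative only: the book applies differential evolution to a minimal-representation (information-measure) objective over sensor-data interpretations, whereas here a simple sphere function and all parameter values (`pop_size`, `F`, `CR`, `generations`) are assumptions chosen for demonstration.

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200):
    """DE/rand/1/bin minimizer (illustrative sketch, not the book's code).

    objective: function mapping a candidate vector to a scalar cost
    bounds:    list of (low, high) pairs, one per dimension
    F:         differential weight; CR: crossover probability
    """
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals (none equal to i) for mutation.
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            # Mutant vector: base individual plus scaled difference vector.
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # Binomial crossover; jrand guarantees at least one mutant gene.
            jrand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            # Clamp the trial vector back into the search bounds.
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            # Greedy selection: keep the trial only if it is no worse.
            s = objective(trial)
            if s <= scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

# Toy surrogate for a representation-size cost: a 3-D sphere function.
best_x, best_f = differential_evolution(lambda x: sum(v * v for v in x),
                                        bounds=[(-5.0, 5.0)] * 3)
```

In the fusion setting, the objective would instead score a candidate interpretation by its encoded representation size under the sensor uncertainty models; the population-based, derivative-free search is what makes differential evolution attractive for that kind of discrete-continuous objective.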