Trying to start Choregraphe with the command /opt/Aldebaran Robotics/Choregraphe Suite 2.1/bin/choregraphe_launcher fails, because libpng16.so.16 requires libz.so.1 v1.2.9, which is not present. As indicated here, the libz.so provided with Choregraphe is not needed. Made a link in Choregraphe's lib directory to the latest version with sudo ln /lib/x86_64-linux-gnu/libz.so.1.2.11 libz.so.1.2 and pointed libz.so.1 to this libz.so.1.2, instead of directly copying libz.so.1.2.8. The Webots world is loaded (nao_indoors.wbt).

Sam was not visible wirelessly, so connected to Sam with an ethernet cable. Disabled the DNT-mode with mv nf and rebooted. According to the web interface, Sam is running NAOqi v2.8.5.10. Now Choregraphe is able to connect to Sam, although I get the warning that Choregraphe 2.5.10.7 may not be compatible with NAOqi 2.8.

In a 2h video on the History of AI, Claude Shannon shows in 1952 how a mouse (called Theseus) can explore a maze (52:19-59:17). He built this nice demonstration with his wife Betty.

Due to Covid-19, I don't have access to a physical NAO and need to work with simulations. The goal is to model dialogues of different complexity, also involving gestures. Speech recognition is the most important feature here, but simulation of other features that add more realism (like voice) would be appreciated too. So far:

- Choregraphe: the included simulation works fine, but is very restricted in its abilities. If I'm not missing something, dialogues are only simulated in a written chat, so I type the speech input and get 'speech bubbles' as a response.
- Webots (using Python controllers): the most promising approach so far, but there is basically no documentation on how to write NAO controllers. I could not figure out how to make the Speaker() class work (see the controller sketch below). The robot and world simulation from naoqisim (which is also no longer maintained) seems to run fine.
- Webots using a ROS controller: there is no official support for Mac, and the recommended installation for ROS Kinetic has not yet worked for me.

I'd appreciate any hint on whether Webots is even suitable for dialogues (it seems to be mostly focused on movement), or advice on other suitable simulations.

The answer: the ALTextToSpeech and ALSpeechRecognition APIs don't work on the virtual robot, unfortunately. From the docs: the ACAPELA, microAITalk and Nuance engines are only available on the real robot; when using a virtual robot, the said text can be visualized in the Choregraphe Robot View and Dialog panel. Speech recognition likewise cannot be tested on a simulated robot - that module is only available on a real robot. The text interaction can be used to test the flow of your dialogs, but won't allow you to test the nuances of speech recognition properly. The best currently available simulation environment for Pepper/NAO is the ROS Gazebo stack; Webots is not supported any more, and I've never had any luck getting it set up. Gazebo would allow you to simulate the robot making gestures and moving through the world, but it's really not designed for audio simulation either: you would have to write your own custom code (ROS nodes, in Python or C++) to process the audio, do speech recognition, and output speech (connected up to a mic and speakers you have, for example). If you plan to use a NAOqi QiChat chatbot, you could use the naoqi Python APIs to run that and just connect external speech-to-text and text-to-speech services to it. Though if you want more complex speech interactions, I'd suggest a full-blown chatbot (Dialogflow, IBM Watson, etc.).
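For the QiChat route mentioned in the answer, a minimal sketch with the NAOqi Python SDK (Python 2) could look like the following. It drives ALDialog with text input via forceInput(), so it needs neither a microphone nor a speech engine and should therefore also be usable against a virtual robot started from Choregraphe. The IP address, port, topic file path and the event polling are illustrative assumptions, not values from the post above.

```python
# -*- coding: utf-8 -*-
# Hedged sketch: exercising a QiChat topic through the NAOqi Python SDK
# (Python 2) using text input only, so no microphone or TTS engine is needed.
# ROBOT_IP, the topic path and the polling below are illustrative assumptions.
import time
from naoqi import ALProxy

ROBOT_IP = "127.0.0.1"   # virtual robot started by Choregraphe, or a real NAO
ROBOT_PORT = 9559

dialog = ALProxy("ALDialog", ROBOT_IP, ROBOT_PORT)
memory = ALProxy("ALMemory", ROBOT_IP, ROBOT_PORT)

dialog.setLanguage("English")

# loadTopic() takes the path to a .top QiChat file and returns the
# topic name declared inside that file.
topic = dialog.loadTopic("/home/nao/greeting_enu.top")  # hypothetical topic file
dialog.activateTopic(topic)
dialog.subscribe("simulation_test")

try:
    # forceInput() feeds text to the dialog engine as if it had been heard,
    # which is how a dialog flow can be tested without speech recognition.
    dialog.forceInput("hello")
    time.sleep(1.0)  # the reply is raised asynchronously as an ALMemory event
    print(memory.getData("Dialog/LastAnswer"))
finally:
    dialog.unsubscribe("simulation_test")
    dialog.deactivateTopic(topic)
    dialog.unloadTopic(topic)
```

External speech-to-text and text-to-speech services could then be wired in by passing transcribed text to forceInput() and sending the Dialog/LastAnswer string to a TTS service.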
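Coming back to the Speaker() issue in the Webots item of the question: in the Webots Python API a Speaker is a device fetched from the Robot instance rather than constructed directly, which may be the stumbling block. The sketch below shows that usual pattern; the device name "speaker", the presence of a Speaker node on the simulated NAO, and the availability of the "pico" engine are assumptions, not verified against nao_indoors.wbt.

```python
# Minimal Webots Python controller sketch for text-to-speech output.
# Assumes the robot model exposes a Speaker device named "speaker" and that
# the built-in "pico" TTS engine is available; both are assumptions.
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

# Older Webots releases use robot.getSpeaker("speaker") instead of getDevice().
speaker = robot.getDevice("speaker")   # returns None if no such device exists
if speaker is not None:
    speaker.setEngine("pico")          # offline TTS engine shipped with Webots
    speaker.setLanguage("en-US")
    speaker.speak("Hello, I am a simulated NAO.", 1.0)  # text, volume 0.0-1.0

# Keep the controller stepping so the generated audio is actually played.
while robot.step(timestep) != -1:
    pass
```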