I have updated the plugin; the changes are as follows:
- changed the format the data is returned in to JSON.
- made it possible to have much finer control over the returned results.
- there are now three modes of search: text, image & sound. Text mode returns answers as text; image mode returns answers as images of the object, graphs, maps, simple dictionary-style snippets and so on; sound mode returns URLs for listening to the requested sound. With the new changes you can not only select which pods are returned, you can select anything within a pod, or even let your app know the number of results or assumptions made.
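To give a feel for what pod-level selection looks like, here is a minimal sketch. It assumes the plugin wraps WolframAlpha's Full Results API, which accepts an `output=json` parameter and pod-selection parameters such as `includepodid`; the app id, query, and pod id below are placeholders, and the sample response shape is illustrative only:

```javascript
// Build a Full Results API query URL that asks for JSON and a single pod.
// "appid" must be your own WolframAlpha app id.
function buildQueryUrl(input, podId, appId) {
  const params = new URLSearchParams({
    appid: appId,        // placeholder app id
    input: input,        // the query text
    output: "json",      // the new JSON return format
    includepodid: podId  // restrict the response to one pod
  });
  return "https://api.wolframalpha.com/v2/query?" + params.toString();
}

// Pull a single subpod's plain text out of a parsed JSON response.
function firstPlaintext(queryresult, podId) {
  const pod = (queryresult.pods || []).find(p => p.id === podId);
  return pod ? pod.subpods[0].plaintext : null;
}
```

In an app you would fetch `buildQueryUrl(...)`, parse the JSON, and hand `queryresult` to `firstPlaintext` to grab just the value you care about instead of the whole result set.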
- added a quick question mode - the answer is a to-the-point reply with no extras.
- added a quick question mode formatted for SPOKEN REPLIES! - the returned answer is formatted as if it were spoken to you.
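As a rough sketch of how the two quick-question modes could be called, assuming they map to WolframAlpha's Short Answers API (`/v1/result`) and Spoken Results API (`/v1/spoken`), both of which return a single plain-text string (the app id is a placeholder):

```javascript
// Build a quick-question URL; pass spoken = true for the spoken-reply variant.
function quickQuestionUrl(question, appId, spoken = false) {
  const endpoint = spoken ? "spoken" : "result"; // /v1/spoken vs /v1/result
  return "https://api.wolframalpha.com/v1/" + endpoint +
         "?appid=" + encodeURIComponent(appId) +
         "&i=" + encodeURIComponent(question);
}
```

The spoken variant returns text phrased as a sentence ("The answer is four") rather than a bare value, which is what makes it suitable for feeding into a text reader.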
- added an element! - a text reader. The element allows typed text to be read aloud with a button press… BUT its real power comes when you drop the width and height to 0px and use the new workflow action inside “element actions” to have it read a text… say, maybe, the result of a quick question formatted as a SPOKEN REPLY…
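For those curious how an element like this can work under the hood, a minimal sketch using the browser's standard Web Speech API (`speechSynthesis`) is below. This is an assumption about the implementation, not the plugin's actual code; the helper names are hypothetical, and the guard makes it safe to load outside a browser:

```javascript
// Normalize the text into a small config object (pure, easy to test).
function makeUtteranceConfig(text, rate = 1.0) {
  return { text: String(text).trim(), rate: rate };
}

// Speak the text if a browser speech engine is available.
// Returns true if speech was started, false otherwise.
function speak(text) {
  if (typeof window === "undefined" || !window.speechSynthesis) {
    return false; // no speech engine (e.g. server-side) - do nothing
  }
  const cfg = makeUtteranceConfig(text);
  const utterance = new SpeechSynthesisUtterance(cfg.text);
  utterance.rate = cfg.rate;
  window.speechSynthesis.speak(utterance);
  return true;
}
```

Shrinking the element to 0px x 0px works because the audio comes from the speech engine, not from anything rendered on the page, so the element only needs to exist in the DOM for its workflow action to fire.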
I will be making the speech element editable shortly and will also be adding a new plugin devoted to text-to-speech.
This update relates to this post - [New Plugin] WolframAlpha