Why is input required strictly in JSON format?
Any thoughts on why this isn't simplified for the most common use cases? I have txt/docx/pdf/image files as input, and all the complexity of converting them to a structured format should be abstracted away from the user. Isn't it a very common scenario to have several unstructured files of different types sitting in blob containers, let the API crawl through them, and get back the results? Similar to what Azure Search does. Also, why don't you unify these services? The text analytics module in Azure ML recognizes only 3 NER types, whereas Azure Cognitive Services recognizes more. Shouldn't these all be using the same engine behind the scenes?
JSON is a common format across many technologies, enabling integration through a shared contract. Building a separate connector for every input type is challenging given the growing number of integration points.
To simplify preprocessing, look into Microsoft Flow, which provides templates for this kind of pipeline. There are also a number of libraries in .NET, Python, and other languages that you can use to extract text from your files before sending it to the API.
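As a minimal sketch of that preprocessing step: once text has been extracted from a txt/docx/pdf file (by whatever library you choose), wrapping it in the JSON document format the Text Analytics API expects is only a few lines. The `build_documents_payload` helper name is my own; the `documents`/`id`/`language`/`text` shape is the payload structure the API has documented, but verify against the current API version.

```python
import json

def build_documents_payload(texts, language="en"):
    """Wrap already-extracted plain-text strings in the JSON
    document structure expected by the Text Analytics API.
    Each document needs a unique string id."""
    return {
        "documents": [
            {"id": str(i), "language": language, "text": text}
            for i, text in enumerate(texts, start=1)
        ]
    }

# Example: two texts, e.g. read from .txt files or produced by a PDF extractor
payload = build_documents_payload([
    "Seattle is a port city in Washington.",
    "Contoso announced a new product line.",
])
print(json.dumps(payload, indent=2))
```

You would then POST this payload to the service endpoint with your subscription key; the file-to-text extraction itself stays in whichever library fits your input formats.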
Yes, ideally all of these services would use the same engine and behave consistently. We are working on that unification.