Video Indexer
Welcome to the Video Indexer Forum
Categories
API – Any ideas or feedback pertaining to features or enhancements to Video Indexer API.
General – Any general ideas or suggestions related to Video Indexer.
Portal – Any ideas or feedback regarding the web portal for Video Indexer.
Video AI – Any ideas or suggestions regarding Video AI features in Video Indexer.
Attention!
We have moved our Customer Feedback & Ideas for Azure Cognitive Services portal to the Azure Feedback Forum.
New Search Engines
Wow! This is an amazing service. I used a less capable system before, but I love this new service. So many possibilities with this product. The speed was excellent. UI was awesome. Very nice work!!
How about a new search engine to locate text within millions of videos, or find all the videos that contain certain text? So cool.
2 votes
Can't reach the API from Web Portal?
It took me two hours to find the Breakdown API; the help steps at (https://docs.microsoft.com/en-us/azure/cognitive-services/video-indexer/video-indexer-use-apis#subscribe-to-the-api) seemed to imply I was supposed to go to the VI Web Portal, then click on a Products tab that would bring me there, but I couldn't find the link if there is one.
I can access the breakdown API from https://videobreakdown.portal.azure-api.net/products, but it's not really clear that you're supposed to reach it from a direct link rather than the portal.
1 vote
Store and get Face/OCR/object detection position data (X, Y, Height, Width) for rendering on the player
It looks like there is no x, y, width, height data in the Video Indexer API. Customers want it so they can draw rectangles over the player.
Video Indexer:
https://docs.microsoft.com/en-us/azure/cognitive-services/video-indexer/video-indexer-output-json
Azure Media Analytics: Face Detector:
https://docs.microsoft.com/en-us/azure/media-services/media-services-face-and-emotion-detection
6 votes
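For the request above, a minimal Python sketch of what a client could do if such data were exposed: converting normalized bounding-box values (x, y, width, height in the 0-1 range, as in the linked Face Detector output) into pixel coordinates for an overlay rectangle. The field names and JSON shape here are illustrative assumptions, not actual Video Indexer output, which is exactly what this idea asks for.

    # Minimal sketch: map normalized bounding-box values (assumed 0-1 range,
    # as in the linked Face Detector output format) to pixel coordinates so a
    # client-side player overlay can draw a rectangle. Field names are illustrative.

    def to_pixel_rect(box, player_width, player_height):
        """Convert a normalized {x, y, width, height} box to pixel coordinates."""
        return {
            "left": round(box["x"] * player_width),
            "top": round(box["y"] * player_height),
            "width": round(box["width"] * player_width),
            "height": round(box["height"] * player_height),
        }

    # Example: a hypothetical face event in the upper-left area of a 1280x720 player.
    face_event = {"id": 0, "x": 0.12, "y": 0.08, "width": 0.10, "height": 0.18}
    print(to_pixel_rect(face_event, 1280, 720))
    # {'left': 154, 'top': 58, 'width': 128, 'height': 130}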
API for editing breakdown data
Currently we can edit breakdown data, such as OCR text and captions, via the VI UI.
Do you have a plan to expose those "Edit" functions (not the "Edit" UI) as APIs to update a specific part of the breakdown data?
Thanks,
-Shige
1 vote
Merge faces
It would be useful to merge multiple faces into a single face in the portal.
6 votes
Player Widget customization, such as adding AMP plugin to Widget
Can you add support for the AMP plugin extension to the Player Widget returned by the API?
1 vote
Localization for search, sentiment analysis
There are some differences between English and other-language content. We know that some features of the underlying services, such as Azure Media Analytics and Cognitive Services, don't support many languages. Please consider making international scenarios easier to use, especially search; localization support would be helpful.
1 vote
Search is supported in all languages that are supported for translation.
Integrate Cognitive Services custom service features
To improve the initial output, this integration would make for a more powerful story - especially with Custom Vision Service, Custom Speech Service, and Translator Hub.
That would allow better first-pass output accuracy for each domain topic.
5 votes
Arrange Azure Media Indexer 2 output characters into normal language sentences for more accurate translation
Azure Media Indexer 2 outputs each line of the caption file as it appears on screen. That breaks sentences apart. We need to merge some of them to create meaningful sentences for more accurate translation.
If translation capability is supported in this API, please consider this feature.
4 votes
This is how translation is processed.
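As a client-side workaround for the idea above, a minimal Python sketch of the kind of merging being requested: joining per-screen caption lines back into full sentences before sending them to a translator. It assumes the caption text has already been extracted from the file (no SRT/TTML timing handling).

    # Minimal sketch: merge per-screen caption lines back into full sentences
    # before translation. Assumes plain text lines already extracted from the
    # caption file.

    SENTENCE_END = (".", "!", "?")

    def merge_caption_lines(lines):
        """Join caption lines until a sentence-ending punctuation mark is seen."""
        sentences, buffer = [], []
        for line in lines:
            text = line.strip()
            if not text:
                continue
            buffer.append(text)
            if text.endswith(SENTENCE_END):
                sentences.append(" ".join(buffer))
                buffer = []
        if buffer:  # trailing fragment without end punctuation
            sentences.append(" ".join(buffer))
        return sentences

    captions = [
        "The quick brown fox jumps",
        "over the lazy dog.",
        "It then runs into",
        "the forest.",
    ]
    print(merge_caption_lines(captions))
    # ['The quick brown fox jumps over the lazy dog.', 'It then runs into the forest.']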
Integrate Face Blurring into the Video Indexer UX/Widget
Create a Widget or Interface ability to select a specific face within a video to blur. Create a new flag to "Blur All Faces" upon Upload of a new Video. Create a flag that can be saved to "Always Blur This Selected Face" for any future video that contains that face.
10 votes
Thank you for your feedback, we are now reviewing the feature.
Please add Hebrew language - Video Indexer
Please add the Hebrew language, we really need it at the university.
2 votes
Thank you for reaching out.
We are looking into adding Hebrew support, we will update when it’s available.
I cannot delete the video.
{"ErrorType":"BREAKDOWNNOTFOUND","Message":"Playlist 'ce83a1cc66' was not found."}
I tried to delete the video in Video Indexer, so I wrote Python code to delete it automatically, but why do I get this error message in the cmd window when only one video is found?
1 vote
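For anyone hitting this, a minimal Python sketch of a delete call, assuming the v2-style Video Indexer endpoint; the location, account id, video id, and access token are placeholders to adjust for your own account. A BREAKDOWNNOTFOUND response usually means the id passed does not match an indexed video in that account.

    # Minimal sketch (assumed v2-style endpoint): delete a video by id and
    # surface the error body instead of failing silently. Replace the
    # placeholders with your own location, account id, video id and token.
    import requests

    LOCATION = "trial"                       # e.g. "trial" or an Azure region
    ACCOUNT_ID = "<your-account-id>"
    VIDEO_ID = "ce83a1cc66"
    ACCESS_TOKEN = "<account-access-token>"

    url = f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos/{VIDEO_ID}"
    resp = requests.delete(url, params={"accessToken": ACCESS_TOKEN})

    if resp.status_code == 404:
        # e.g. {"ErrorType": "BREAKDOWNNOTFOUND", ...} - the id is not an indexed video
        print("Video not found:", resp.text)
    else:
        resp.raise_for_status()
        print("Deleted", VIDEO_ID)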
Landmark Recognition For Videos
Looking for the ability to recognise landmarks (e.g. Eiffel Tower, Sydney Harbour Bridge, etc.) present in videos that are uploaded and stored within Video Indexer.
I believe this capability exists within the Computer Vision API today (9,000 landmarks when I last checked).
8 votes
Import existing edited transcripts or caption files
The ability to upload an already edited transcript will enable those with existing libraries to move over content without the need for further human editing.
Ideally VI would apply the timings, and insert them in their appropriate place on the Transcript panel.
3 votes
Create an Azure Media Services 'Media Processor' as a wrapper for Video Indexer.
Media Processors in Azure Media Services are the components you can string together in a pipeline to build a media workflow, including transcoding and media analytics, etc. In order for Video Indexer to more cleanly integrate with existing media workflows, there should be a VI Media Processor that can be added to regular Azure Media Services jobs. This way, I can create an Azure Media Services job in a few lines of code that will upload a video, transcode a mezzanine format such as ProRes into a multi-bitrate mp4, generate a shortened version for a video thumbnail using the video…
5 votes
Many of the insights provided by VI are available in AMS RPv3 audio and video analysis.
VI also allows providing an AMS asset as an input for indexing.
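To illustrate the response above, a rough Python sketch of creating an AMS v3 Transform that runs the VideoAnalyzerPreset via the ARM REST API, so analysis can sit in a regular AMS job pipeline alongside encoding steps. The resource names, api-version, and bearer token are placeholder assumptions; jobs submitted against this transform then produce the audio/video insights.

    # Rough sketch (placeholders throughout): create an AMS v3 Transform that
    # runs the VideoAnalyzerPreset, so analysis can be chained into a regular
    # AMS job pipeline alongside transcoding steps.
    import requests

    SUBSCRIPTION = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    AMS_ACCOUNT = "<media-services-account>"
    TOKEN = "<azure-ad-bearer-token>"

    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
        f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Media"
        f"/mediaServices/{AMS_ACCOUNT}/transforms/VideoAnalysis"
        "?api-version=2020-05-01"
    )
    body = {
        "properties": {
            "outputs": [
                {"preset": {"@odata.type": "#Microsoft.Media.VideoAnalyzerPreset"}}
            ]
        }
    }
    resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    print("Transform created:", resp.json()["name"])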
Use web hooks for indexing job completion notification
When I submit a video to be indexed via UploadVideo API method, I'd like to provide a web hook url as a callback mechanism to indicate when a video is done processing. Processing state could also be provided via this web hook mechanism as well, not just to indicate completion.
The advantage to this approach is a decoupling of the submission of indexing jobs from the post-completion handling. This is important when using Azure Functions since you don't want your functions to be running (polling for progress) for as long as it takes the video indexing job to complete.…
8 votes
This is available through the callbackUrl parameter you can send in the upload video request – https://api-portal.videoindexer.ai/docs/services/Operations/operations/Upload-Video
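A minimal Python sketch of passing callbackUrl on the upload call, based on the response above. The location, account id, and other parameter values are placeholders, and the authoritative parameter list is on the linked API page.

    # Minimal sketch: upload a video by URL and register a webhook via the
    # callbackUrl parameter mentioned in the response above. Location, account
    # id and parameter values are placeholders - see the linked API page.
    import requests

    LOCATION = "trial"
    ACCOUNT_ID = "<your-account-id>"
    ACCESS_TOKEN = "<account-access-token>"

    upload_url = f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos"
    params = {
        "accessToken": ACCESS_TOKEN,
        "name": "my-video",
        "videoUrl": "https://example.com/videos/my-video.mp4",
        # Video Indexer calls back to this URL with the video id and state,
        # so an Azure Function does not have to poll for completion.
        "callbackUrl": "https://example.com/api/vi-callback",
    }

    resp = requests.post(upload_url, params=params)
    resp.raise_for_status()
    print("Video id:", resp.json()["id"])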
Disable quarantine for strictly private video
Videos that are used for preparation of subtitles for commercial movies/series are sometimes quarantined by the VI (e.g. Dr. House). The VI could be used to provide a transcript of the video which is then manually corrected/adapted into a high-quality subtitle file. However, in this scenario the VI should not put commercial movies/series into quarantine, because people preparing subtitles always get the video material even before it is shown in theaters or on TV, and they would like to send it to the VI to get the transcript.
One way to avoid putting videos into quarantine is to introduce "Strictly private"…
2 votes
Provide an API function that returns the list of supported languages
It would be useful to get the list of supported languages to be able to display available languages in the UI of the application using the VI.
2 votes
ohadjas responded:
Can you please clarify – list of supported languages for transcript or translation?
Allow setting language to "None" to skip speech recognition for unsupported languages
If the audio language is not supported by the VI, the breakdown might still be useful, but speech recognition does not make any sense in that case. Setting the language to "None", "Other", or "Unsupported" would be useful.
3 votes
Add edit ability for transcripts to correct auto generated transcripts
Inevitably there are going to be words from speakers that are misidentified, and it would be great if there were the ability to correct the speech-to-text manually. Perhaps this could be used to feed back into the model for training as well.
This would make the transcript feature much more useful, as it could basically function as an assisted transcription tool where all an editor would have to do is supervise the generated transcription, correct the parts that are wrong, and save.
2 votes
Don't see your idea?