"Hello, series of questions regarding the insertion of photos and images.
1 - Is there a differentiation between a photo, an image (for example from a local application), and a screenshot? And can we search and sort in myReach based on these criteria?
2 - After trying to insert both a screenshot and a photo taken with my smartphone, it seems they are treated the same way?
3 - Is the EXIF data integrated and accessible, so that we can search on it? In particular, it is a pity not to keep the date the photo or screenshot was taken. The three dates I see seem identical.
4 - Text recognition in images or screenshots is extremely useful. However, how can we access the recognized text? To copy it into a related note, for example.
5 - Regarding tagging, is there some sort of auto-tagging, with the possibility to read the generated tags? I imagine the AI capabilities may perhaps already cover this concept?
No rush for the answers, of course… Thank you in advance.
PS: I was playing with another app with the word mind in its name. Yours seems to be galactically more powerful.
1 - There isn’t really a differentiation between image types; all images are considered Files (whether they’re photos, screenshots, etc.). However, if you upload multiple images simultaneously, they are saved as an Album, which is one way of differentiating and sorting pictures. You will see that each picture in the album has its own Properties as well.
2 - See above. Screenshots and photos are treated the same way.
3 - Thanks for pointing this out – that’s not how it should work in the App. I have reported the bug and will let you know as soon as it’s fixed. In principle, these are the 3 dates for every picture you upload:
i) Creation Date – the date and time when you saved it to myReach (i.e. when the Node was created)
ii) Last Modified – in case you make changes to the picture after saving it to myReach
iii) File Creation Date – the date when the original picture or screenshot was taken (see the sketch below).
Stay tuned for the bug fix on this last one!
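For the curious: the shooting date lives inside the picture itself, in its EXIF metadata (the DateTimeOriginal tag). Here is a minimal sketch of reading it, assuming Python with the Pillow library – an illustration of where that date comes from, not of how myReach handles it internally:

```python
# Sketch only (assumes Pillow is installed): read the EXIF shooting date.
from PIL import Image

def shooting_date(path):
    """Return the EXIF DateTimeOriginal string, e.g. '2023:07:14 18:32:05'."""
    exif = Image.open(path).getexif()
    exif_ifd = exif.get_ifd(0x8769)  # the Exif sub-IFD holding camera tags
    return exif_ifd.get(0x9003)      # 0x9003 = DateTimeOriginal

print(shooting_date("holiday.jpg"))  # returns None if no EXIF was written
```

Note that screenshots usually carry no EXIF block at all, so their date typically has to come from the file’s own creation timestamp instead.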
4 - When you’re in a Node, click on the 3 dots at the top right of the screen. There you will find the “Preview Content” option. This can be done for pictures, as you mention, but it also works with documents, websites, videos and audio recordings (so you can transcribe spoken words to text)! Once you have the extracted content, you can convert it into a Note with the “Convert text into a Note” button at the bottom of the screen.
5 - Yes, if there is an existing Tag that is relevant to the content you saved, the App will automatically create a recommended relationship (see image below). You can see a list of other recommended Tags by going to the “+” button next to the search bar in the Relationships section; the list of Tags appears under the “Recommended” tab.
We’re currently working on recommendations and will release an updated and improved version shortly – stay tuned!
@sofia, a few follow-up questions related to your answers above.
I see that text recognition is working. But French does not seem to be implemented yet: when I use images containing French, the OCR has difficulty extracting proper French characters, the accented ones. Maybe the OCR is not using any French dictionary?
I am also seeing strange keywords in the set of detected tags that I could add to the Nodes. I suppose this is known?
Can you just confirm these?
Cheers
Denis Cadamuro
The App is currently optimised for English, but it can also answer questions in other languages, including French. In terms of OCR, even if some characters or symbols are not recognised, the AI should still understand the overall meaning of the text well enough to answer your questions (you can ask the AI chat questions in French and it will answer in French).
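For background: most OCR engines ship separate models per language, and accented characters are only recognised reliably when the matching model is loaded. Here is a minimal sketch with the open-source Tesseract engine – an assumption for illustration, not a statement about which engine myReach uses:

```python
# Sketch only (assumes Tesseract, its French language pack, and the
# pytesseract wrapper are installed): compare English vs. French models.
from PIL import Image
import pytesseract

img = Image.open("note_francaise.png")
print(pytesseract.image_to_string(img, lang="eng"))  # may mangle é, è, ç…
print(pytesseract.image_to_string(img, lang="fra"))  # French model keeps accents
```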
With regards to the recommendations (detected tags), we’re currently working on improving them in the App and will release an update shortly. This should improve your experience with the detected tags you get.