Camp Rules (image post by Thales to 196@lemmy.blahaj.zone, 7 days ago)
Superblink (6 days ago): I’d say if there is training beforehand, then it’s “generative AI”.
@brucethemoose@lemmy.world (5 days ago): Not a great metric either, as models with much simpler output are also extensively trained: text embedding models, which are used to produce a single number representing ‘similarity’, or machine vision models that recognize objects. Another example is NNEDI3, a very primitive edge-enhancement model. Or LanguageTool’s tiny ‘word confusion’ model: https://forum.languagetool.org/t/neural-network-rules/2225
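A minimal sketch of the embedding-model point above: a trained neural network whose useful output is one similarity score, not generated content. This assumes the sentence-transformers library and the all-MiniLM-L6-v2 model, neither of which the commenter named.

```python
# Sketch: a trained (non-generative) model whose useful output is one number.
# Assumes the sentence-transformers library; the model name is illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small pretrained embedding model

# Encode two phrases into fixed-size vectors (no text is generated).
emb = model.encode(["camp rules", "campsite guidelines"])

# Cosine similarity collapses the two vectors into a single score in [-1, 1].
score = util.cos_sim(emb[0], emb[1]).item()
print(f"similarity: {score:.3f}")
```

Despite the extensive pretraining behind it, nothing here resembles generative AI: the model only maps text to vectors, and the final output is a single scalar.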