Big tech companies such as Google and Facebook are reluctant to admit that many of the so-called AI algorithms they sell to military and security services do not work effectively. This applies in particular to "face recognition" and "large language translation". The widespread use of these systems on social media can cause serious misunderstandings.
The limitations of these particular techniques have been known throughout their development, from the 1950s to the present. Face recognition has been a much-analysed topic, first discussed in Japanese work published in the Pattern Recognition Society bulletin in 1970. Large language translation was developed as a volume translation service covering multiple language pairs at the European Commission. As a result of this intensive application across many subject areas, the failure modes associated with specific language pairs and subject contexts were well established. In professional environments, human translators post-edit machine-translated content to publishable quality and to avoid misrepresentation. The European Commission (which has been researching and using machine translation since the 1960s) found that,
"as long as translation can be restricted in subject matter or by document type ... improvements in quality can be achieved" (Hutchins, 2005).
Big tech companies do not provide such a precautionary filter.
Those who attempt to explain these facts are ostracized, treated as whistle-blowers, sacked, and pursued by these corporations, which continue to earn considerable money by selling faulty systems and online services (see "Former Google ethical AI team co-lead Timnit Gebru exposes unethical AI by Google & Big Tech").