ITA 2019 Military Entrance Exam Questions for Students - Mathematics, Physics, Chemistry, Portuguese and English


Q1287871 English
This question refers to the text below:

“Of course they're fake videos, everyone can see they're not real. All the same, they really did say those things, didn't they?” These are the words of Vivienne Rook, the fictional politician played by Emma Thompson in the brilliant dystopian BBC TV drama Years and Years. The episode in question, set in 2027, tackles the subject of “deepfakes” - videos in which a living person's face and voice are digitally manipulated to say anything the programmer wants.
Rook perfectly sums up the problem with these videos - even if you know they are fake, they leave a lingering impression. And her words are all the more compelling because deepfakes are real and among us already. Last year, several deepfake porn videos emerged online, appearing to show celebrities such as Emma Watson, Gal Gadot and Taylor Swift in explicit situations.
[...]
In some cases, the deepfakes are almost indistinguishable from the real thing - which is particularly worrying for politicians and other people in the public eye. Videos that may initially have been created for laughs could easily be misinterpreted by viewers. Earlier this year, for example, a digitally altered video appeared to show Nancy Pelosi, the speaker of the US House of Representatives, slurring drunkenly through a speech. The video was widely shared on Facebook and YouTube, before being tweeted by President Donald Trump with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The video was debunked, but not before it had been viewed millions of times. Trump has still not deleted the tweet, which has been retweeted over 30,000 times.
The current approach of social media companies is to filter out and reduce the distribution of deepfake videos, rather than outright removing them - unless they are pornographic. This can result in victims suffering severe reputational damage, not to mention ongoing humiliation and ridicule from viewers. “Deepfakes are one of the most alarming trends I have witnessed as a Congresswoman to date,” said US Congresswoman Yvette Clarke in a recent article for Quartz. “If the American public can be made to believe and trust altered videos of presidential candidates, our democracy is in grave danger. We need to work together to stop deepfakes from becoming the defining feature of the 2020 elections.”
Of course, it's not just democracy that is at risk, but also the economy, the legal system and even individuals themselves. Clarke warns that, if deepfake technology continues to evolve without a check, video evidence could lose its credibility during trials. It is not hard to imagine it being used by disgruntled ex-lovers, employees and random people on the internet to exact revenge and ruin people's reputations. The software for creating these videos is already widely available.
Source: Curtis, Sophie. https://www.mirror.co.uk/tech/deepfake-videos-creepy-new-internet-18289900. Adapted. Accessed August 2019.
According to Congresswoman Yvette Clarke, given the various risks posed by deepfake videos, it is necessary to
Alternatives
Q1287872 English
This question refers to the text below:

About seven years ago, three researchers at the University of Toronto built a system that could analyze thousands of photos and teach itself to recognize everyday objects, like dogs, cars and flowers. The system was so effective that Google bought the tiny start-up these researchers were only just getting off the ground. And soon, their system sparked a technological revolution. Suddenly, machines could “see” in a way that was not possible in the past.
This made it easier for a smartphone app to search your personal photos and find the images you were looking for. It accelerated the progress of driverless cars and other robotics. And it improved the accuracy of facial recognition services, for social networks like Facebook and for the country's law enforcement agencies. But soon, researchers noticed that these facial recognition services were less accurate when used with women and people of color. Activists raised concerns over how companies were collecting the huge amounts of data needed to train these kinds of systems. Others worried these systems would eventually lead to mass surveillance or autonomous weapons.
Source: Metz, Cade. Seeking Ground Rules for A.I. www.nytimes.com, 01/03/2019. Adapted. Accessed August 2019.
According to the information in the text, select the alternative that best completes the statement: The new system proved to be less precise when
Alternatives
Q1287873 English
This question refers to the same text presented in Q1287872 (Metz, Cade, Seeking Ground Rules for A.I.).

Analyze statements I to V below.
I. Activists raised concerns about the way companies were collecting huge amounts of data to train recognition systems.
II. The University of Toronto built an ethical Artificial Intelligence system for image recognition.
III. One of the activists' concerns was the possibility that such systems could lead to mass surveillance or autonomous weapons.
IV. Private technology companies, such as Google, and digital networks, such as Facebook, together with some government agencies, reached a consensus on Artificial Intelligence ethics.
V. Some laws were developed by a few specific groups of people to decide on the future of Artificial Intelligence.
According to the text, the only correct statements are:
Alternatives
Answers
Q1287871 (item 4): E
Q1287872 (item 5): D
Q1287873 (item 6): C