Questões de Concurso Comentadas sobre inglês para tce-sp

Foram encontradas 12 questões

Resolva questões gratuitamente!

Junte-se a mais de 4 milhões de concurseiros!

Q2322001 Inglês
Is It Live, or Is It Deepfake?


It’s been four decades since society was in awe of the quality of recordings available from a cassette recorder tape. Today we have something new to be in awe of: deepfakes. Deepfakes include hyperrealistic videos that use artificial intelligence (AI) to create fake digital content that looks and sounds real. The word is a portmanteau of “deep learning” and “fake.” Deepfakes are everywhere: from TV news to advertising, from national election campaigns to wars between states, and from cybercriminals’ phishing campaigns to insurance claims that fraudsters file. And deepfakes come in all shapes and sizes — videos, pictures, audio, text, and any other digital material that can be manipulated with AI. One estimate suggests that deepfake content online is growing at the rate of 400% annually.


There appear to be legitimate uses of deepfakes, such as in the medical industry to improve the diagnostic accuracy of AI algorithms in identifying periodontal disease or to help medical professionals create artificial patients (from real patient data) to safely test new diagnoses and treatments or help physicians make medical decisions. Deepfakes are also used to entertain, as seen recently on America’s Got Talent, and there may be future uses where deepfake could help teachers address the personal needs and preferences of specific students.


Unfortunately, there is also the obvious downside, where the most visible examples represent malicious and illegitimate uses. Examples already exist.


Deepfakes also involve voice phishing, also known as vishing, which has been among the most common techniques for cybercriminals. This technique involves using cloned voices over the phone to exploit the victim’s professional or personal relationships by impersonating trusted individuals. In March 2019, cybercriminals were able to use a deepfake to fool the CEO of a U.K.-based energy firm into making a US$234,000 wire transfer. The British CEO who was victimized thought that the person speaking on the phone was the chief executive of the firm’s German parent company. The deepfake caller asked him to transfer the funds to a Hungarian supplier within an hour, emphasizing that the matter was extremely urgent. The fraudsters used AI-based software to successfully imitate the German executive’s voice. […]


What can be done to combat deepfakes? Could we create deepfake detectors? Or create laws or a code of conduct that probably would be ignored?


There are tools that can analyze the blood flow in a subject’s face and then compare it to human blood flow activity to detect a fake. Also, the European Union is working on addressing manipulative behaviors.


There are downsides to both categories of solutions, but clearly something needs to be done to build trust in this emerging and disruptive technology. The problem isn’t going away. It is only increasing.


Authors


Nit Kshetri, Bryan School of Business and Economics, University of North Carolina at Greensboro, Greensboro, NC, USA


Joanna F. DeFranco, Software Engineering, The Pennsylvania State University, Malvern, PA, USA Jeffrey Voas, NIST, USA


Adapted from: https://www.computer.org/csdl/magazine/co/2023/07/10154234/ 1O1wTOn6ynC
The word “downsides” in “There are downsides to both categories” (7th paragraph) means: 
Alternativas
Q2320214 Inglês

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

According to the text, attacks, scams and data theft are actions that should be:
Alternativas
Q2320213 Inglês

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

The newspaper headline expresses the agency’s:
Alternativas
Q498397 Inglês
Leia o texto para responder a questão.

                                      E-mail Spoofing

           E-mail spoofing is the forgery of an e-mail header so that  the message appears to have originated from someone or  somewhere other than the actual source. Distributors of spam  often use spoofing in an attempt to get recipients to open,  and possibly even respond to, their solicitations. Spoofing can  be used legitimately. However, spoofing anyone other than  yourself is illegal in some jurisdictions.
           E-mail spoofing is possible because Simple Mail Transfer  Protocol (SMTP), the main protocol used in sending e-mail,  does not include an authentication mechanism. Although  an SMTP service extension (specified in IETF RFC 2554)  allows an SMTP client to negotiate a security level with a mail
server, this precaution is not often taken. If the precaution is  not taken, anyone with the requisite knowledge can connect  to the server and use it to send messages. To send spoofed  e-mail, senders insert commands in headers that will alter  message information. It is possible to send a message that
appears to be from anyone, anywhere, saying whatever the  sender wants it to say. Thus, someone could send spoofed  e-mail that appears to be from you with a message that you  didn't write.
          Although most spoofed e-mail falls into the “nuisance" category and requires little action other than deletion, the  more malicious varieties can cause serious problems and  security risks. For example, spoofed e-mail may purport  to be from someone in a position of authority, asking for  sensitive data, such as passwords, credit card numbers, or  other personal information – any of which can be used for a  variety of criminal purposes. One type of e-mail spoofing, self- sending spam, involves messages that appear to be both to  and from the recipient.

                                                               (http://searchsecurity.techtarget.com/definition/em.... Adaptado)
An example of sensitive data mentioned in the last paragraph is
Alternativas
Q498396 Inglês
Leia o texto para responder a questão.

                                      E-mail Spoofing

           E-mail spoofing is the forgery of an e-mail header so that  the message appears to have originated from someone or  somewhere other than the actual source. Distributors of spam  often use spoofing in an attempt to get recipients to open,  and possibly even respond to, their solicitations. Spoofing can  be used legitimately. However, spoofing anyone other than  yourself is illegal in some jurisdictions.
           E-mail spoofing is possible because Simple Mail Transfer  Protocol (SMTP), the main protocol used in sending e-mail,  does not include an authentication mechanism. Although  an SMTP service extension (specified in IETF RFC 2554)  allows an SMTP client to negotiate a security level with a mail
server, this precaution is not often taken. If the precaution is  not taken, anyone with the requisite knowledge can connect  to the server and use it to send messages. To send spoofed  e-mail, senders insert commands in headers that will alter  message information. It is possible to send a message that
appears to be from anyone, anywhere, saying whatever the  sender wants it to say. Thus, someone could send spoofed  e-mail that appears to be from you with a message that you  didn't write.
          Although most spoofed e-mail falls into the “nuisance" category and requires little action other than deletion, the  more malicious varieties can cause serious problems and  security risks. For example, spoofed e-mail may purport  to be from someone in a position of authority, asking for  sensitive data, such as passwords, credit card numbers, or  other personal information – any of which can be used for a  variety of criminal purposes. One type of e-mail spoofing, self- sending spam, involves messages that appear to be both to  and from the recipient.

                                                               (http://searchsecurity.techtarget.com/definition/em.... Adaptado)
In the last sentence of the second paragraph – Thus, someone could send spoofed e-mail that appears to be from you with a message that you didn’t write. – the word “thus” introduces a
Alternativas
Respostas
1: C
2: C
3: C
4: E
5: A