Dynamic combatting of SPAM and phishing attacks
First Claim
1. A system for detecting an abusive activity over a network, comprising:
a first computer robot (bot) that is configured to perform actions, including:
proactively sending a first message in a chat room;
receiving a response to the first message; and
if the response indicates an abusive activity in the chat room, determining a source of the response, and transmitting the response and the determined source of the response over the network;
a second computer robot that is configured to perform actions, including:
changing a status for the second computer robot to offline within the chat room without advertising an account identifier that uniquely identifies the second computer robot to the first computer robot such that a presence of the second computer robot is unacknowledged in the chat room;
listening by the second computer robot for a second message; and
if the second message is received by the second computer robot at the unadvertised account identifier, identifying the second message as an abusive message; and
a network device that is configured to perform actions, including:
receiving the response and the determined source of the response;
dynamically revising a filter to block another message based on the response or the determined source; and
dynamically retraining the first computer robot based, in part, on the response, the determined source, and information from the blocking filter, and the second computer robot, such that the first computer robot learns and adapts based on shared collective information.
Abstract
A self-training set of robots is configured to proactively search for selective communication abuses over a network. Robots may enter a chat room and proactively send messages, then analyze the patterns and/or content of any received message for potential abuse. Robots may also passively reside online or offline without publishing their network address; because that address is never advertised, any message such a robot receives may be interpreted as SPAM/SPIM. Robots may also perform a variety of other actions, such as accessing websites and analyzing received messages, to determine whether the messages indicate abuse. If abuse is detected, information may be obtained to enable blocking or filtering of future messages from the sender, or of access to or from an abusive website. The information may also be used to retrain the robots, so that the robots learn from and share their collective knowledge of abusive actions.
20 Claims
1. (Set forth in full above as the First Claim.) Dependent claims: 2, 3, 4, 5, 6, 7.
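The two bot roles recited in claim 1 — a proactive probe bot that sends a first message and inspects responses, and a passive "honeypot" bot whose account identifier is never advertised — can be illustrated with a minimal sketch. The class names, the chat-room model, and the keyword/URL heuristic below are illustrative assumptions, not the patent's implementation:

```python
import re


class ChatRoom:
    """Toy chat-room model: a log of (sender, recipient, text) posts."""

    def __init__(self):
        self.log = []

    def post(self, sender, recipient, text):
        self.log.append((sender, recipient, text))

    def replies_to(self, recipient):
        return [(s, t) for s, r, t in self.log if r == recipient]


class ProactiveBot:
    """Proactively sends a first message, then analyzes any response."""

    # Illustrative heuristic only: a lure word followed by a URL marks abuse.
    SPAM = re.compile(r"(free|winner|click).*https?://", re.IGNORECASE)

    def send_probe(self, room):
        room.post("probe-bot", None, "hello from probe-bot")  # first message

    def collect(self, room):
        reports = []
        for sender, reply in room.replies_to("probe-bot"):
            if self.SPAM.search(reply):
                # Abusive: determine the source, report response + source.
                reports.append({"source": sender, "message": reply})
        return reports


class HoneypotBot:
    """Passive bot: its account identifier is never advertised, so any
    message addressed to it is identified as abusive by definition."""

    def __init__(self, unadvertised_id):
        self.unadvertised_id = unadvertised_id

    def listen(self, inbox):
        # inbox: iterable of (sender, recipient, text) tuples
        return [{"source": s, "message": t}
                for s, r, t in inbox if r == self.unadvertised_id]


room = ChatRoom()
bot = ProactiveBot()
bot.send_probe(room)
room.post("spammer7", "probe-bot", "WINNER! Claim your prize: http://scam.example")
probe_reports = bot.collect(room)

honeypot = HoneypotBot("hp-4821")
honeypot_reports = honeypot.listen([
    ("spammer7", "hp-4821", "cheap meds, no prescription"),
    ("alice", "bob", "see you at five"),
])
```

Note how the honeypot needs no content analysis at all: the mere arrival of a message at the unadvertised identifier is the abuse signal.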
8. A non-transitory, processor readable storage medium having computer-executable instructions, wherein the execution of the computer-executable instructions provides for detecting an abusive activity by enabling actions, including:
generating at least one first robot that is configured to proactively send a first message within a chat room, to receive and to analyze a response to the first message, and if the response indicates an abusive activity, determining a source of the response;
receiving the response, and the determined source of the response, from the at least one first robot;
generating at least one second robot that is configured to perform actions, including:
changing a status for the at least one second robot within the chat room such that the at least one second robot does not advertise an account identifier that uniquely identifies the at least one second robot to the at least one first robot such that a presence of the at least one second robot within the chat room is unacknowledged;
listening for a message sent to the unadvertised account identifier, absent sending a message by the at least one second robot; and
when a message is detected that is sent to the unadvertised account identifier, identifying the message sent to the unadvertised account identifier as an abusive message;
dynamically revising a filter to block another message based on the determined source or the response; and
dynamically training the at least one first robot and the at least one second robot based on the response and the determined source, and information about the revised filter, such that the at least one first robot and the at least one second robot learn and adapt based on shared collective information.
Dependent claims: 9, 10, 11, 12, 13.
14. A method in a first computer robot for detecting abusive activity within a messaging application over a network, operating within at least one computer network device and performing actions comprising:
deploying passively within a first messaging application, without advertising an account identifier that uniquely identifies the first computer robot to at least a second computer robot such that a presence of the first computer robot within the messaging application is unacknowledged in the first messaging application;
listening for a message directed to the first computer robot, absent sending of a message within the messaging application;
if a message is received within the first messaging application, directed to the unadvertised first computer robot's account identifier:
identifying the message as abusive activity;
determining a source of the message; and
providing the message, and the determined source of the message, to a network device for use in filtering another message from the determined source, and to further train the first computer robot based on shared collective information from a plurality of computer robots, such that the first computer robot is enabled to monitor for another message from the determined source in a second messaging application.
Dependent claims: 15, 16, 17.
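The final limitation of claim 14 has the robot carry what it learned about a source in one messaging application over to a second messaging application. A minimal sketch of that cross-application monitoring, assuming a shared source registry (the registry, application names, and identifiers below are hypothetical, not from the patent):

```python
class SourceRegistry:
    """Shared collective information: abusive sources reported by any robot."""

    def __init__(self):
        self.sources = {}  # source -> set of applications it was seen in

    def report(self, source, app):
        self.sources.setdefault(source, set()).add(app)

    def is_known_abuser(self, source):
        return source in self.sources


def monitor(registry, app, inbox, honeypot_id):
    """Flag any message sent to the unadvertised honeypot identifier, plus
    any message from a source already reported in ANY application."""
    flagged = []
    for sender, recipient, text in inbox:
        if recipient == honeypot_id or registry.is_known_abuser(sender):
            registry.report(sender, app)
            flagged.append((app, sender, text))
    return flagged


reg = SourceRegistry()
# First messaging application: the spammer messages the honeypot identifier.
flagged_a = monitor(reg, "chat-A", [("spimmer9", "hp-77", "buy now!")], "hp-77")
# Second messaging application: the same source is caught even though it
# avoids the honeypot, because its identity was shared via the registry.
flagged_b = monitor(reg, "chat-B", [("spimmer9", "alice", "limited offer")], "hp-88")
```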
18. A method, operating within at least one computer network device, of detecting an abusive activity over a network within a messaging application, comprising:
generating a first computer robot that is configured to proactively send a first message within a first messaging application, to receive and to analyze a response to the first message, and if the response indicates an abusive activity, determining a source of the response;
generating a second computer robot that is configured to perform actions, including:
logging the second computer robot into an account for the first messaging application without advertising an account identifier that uniquely identifies the second computer robot to the first computer robot such that a presence of the second computer robot is unacknowledged on the network within the first messaging application;
listening by the second computer robot for a second message; and
if the second message is received by the second computer robot at the unadvertised account identifier, identifying the second message as an abusive message, and further determining a second source for the second message;
dynamically revising a filter to block another message from the determined source or based on the response and further revising the filter to block another message from the second source; and
dynamically training the first computer robot based on the response, the determined source, and second source such that the first computer robot learns and adapts based on shared collective information, such that the first computer robot is enabled to detect another abusive activity based, in part, on the response or the determined source in another messaging application.
Dependent claims: 19, 20.
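Claim 18's filter is revised dynamically from two independent inputs: the proactive robot's analyzed response (with its determined source) and the honeypot robot's "second source". A minimal sketch of such a filter, where the URL-extraction heuristic and all names are illustrative assumptions rather than the patent's method:

```python
import re


class DynamicFilter:
    """Filter revised at run time from robot reports: it blocks by source,
    and by content patterns extracted from confirmed-abusive responses."""

    def __init__(self):
        self.blocked_sources = set()
        self.patterns = []

    def revise(self, source=None, abusive_text=None):
        if source is not None:
            self.blocked_sources.add(source)
        if abusive_text is not None:
            # Illustrative "retraining": reuse any URL seen in confirmed
            # abuse as a content pattern for screening future messages.
            for url in re.findall(r"https?://\S+", abusive_text):
                self.patterns.append(re.compile(re.escape(url)))

    def blocks(self, source, text):
        return (source in self.blocked_sources
                or any(p.search(text) for p in self.patterns))


filt = DynamicFilter()
# Report from the proactive robot: abusive response plus determined source.
filt.revise(source="spammer7", abusive_text="win big at http://scam.example/x")
# Report from the honeypot robot: the second source; no content needed.
filt.revise(source="spimmer9")
```

Because confirmed-abusive content feeds back into the pattern list, a message from a brand-new account that reuses the same abusive URL is still blocked, which is the "learns and adapts based on shared collective information" element of the claim in miniature.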
Specification