# Reconstructed Python source for /usr/lib/python3.6/avc_audit.py
# (setroubleshoot). The original text was a raw dump of the compiled .pyc;
# the identifiers below come from the dump's name tables, and anything
# flagged as a guess in a comment could not be recovered verbatim.

__all__ = ['AuditSocketReceiverThread', 'AuditRecordReceiver', 'verify_avc']

# python-future compatibility imports, as shown in the dump
from builtins import str
from builtins import object

# The dump shows several plain module imports; this set is inferred from the
# names actually used in the recovered code and may be incomplete.
import audit
import selinux
import syslog
import threading

from functools import cmp_to_key
from setroubleshoot.config import get_config
# Three wildcard imports appear in the dump; these module names are guesses
# based on the setroubleshoot package layout.
from setroubleshoot.audit_data import *
from setroubleshoot.errcode import *
from setroubleshoot.util import *

# Older audit bindings may not define AUDIT_EOE (end-of-event, record type
# 1320); the dump shows it being patched in when missing.
try:
    getattr(audit, "AUDIT_EOE")
except AttributeError:
    audit.AUDIT_EOE = 1320

# Capture this process's own SELinux context at import time so verify_avc()
# can detect AVCs generated by setroubleshoot itself. The constructor name is
# a guess; the dump only shows selinux.getcon() feeding a context object.
my_context = AVCContext(selinux.getcon()[1])

def verify_avc(avc):
    if avc.scontext.type is None or avc.tcontext.type is None:
        return False
    if my_context.type == avc.scontext.type:
        # An AVC triggered by setroubleshoot itself would cause us to
        # analyze our own denials forever; bail out instead.
        syslog.syslog(syslog.LOG_ERR,
                      "setroubleshoot generated AVC, exiting to avoid recursion, "
                      "context=%s, AVC scontext=%s" % (my_context, avc.scontext))
        syslog.syslog(syslog.LOG_ERR, "audit event\n%s" % avc.audit_event.format())
        import sys  # the dump shows this import done inline, inside the function
        sys.exit(0)
    return True
class AuditRecordReceiver(object):
    """
The audit system emits messages about a single event
independently. Thus one single auditable event may be composed
from one or more individual audit messages. Each audit message is
prefixed with a unique event id, which includes a timestamp. The
last audit message associated with an event is not marked in any
fashion. Audit messages for a specific event may arrive
interleaved with audit messages for other events. It is the job of
higher level software (this code) to assemble the audit messages
into events. The AuditEvent class is used for assembly. When a new
event id is seen a new AuditEvent object is created, then
every time an audit message arrives with that event id it is added
to that object. The AuditEvent object contains the timestamp
associated with the audit event as well as other data items useful
for processing and handling the event.
The audit system does not tell us when the last message belonging
to an event has been emitted so we have no explicit way of knowing
when the audit event has been fully assembled from its constituent
message parts. We use the heuristic that if a sufficient length of
time has elapsed since we last saw a message for this event, then
it must be complete.
Thus when audit events are created we place them in a cache where
they will reside until their time to live has expired at which
point we will assume they are complete and emit the event.
Events are expired in the flush_cache() method. The events
resident in the cache are sorted by their timestamps. A time
threshold is established. Any events in the cache older than the
time threshold are flushed from the cache as complete events.
When should flushes be performed? The moment when a new message is
added would seem a likely candidate moment to perform a sweep of
the cache. But this is costly and does not improve how quickly
events are expired. We could wait some interval of time (something
much greater than how long we expect it to take for messages to
percolate), and this behaves well except in the following case.
Sometimes messages are emitted by audit in rapid
succession. If we swept the cache once a second then the cache may
have grown quite large. Since it is very likely that any given audit
event is complete by the time the next several events start
arriving we can optimize by tracking how many messages have
arrived since the last time we swept the cache.
The heuristic for when to sweep the cache thus becomes:
If we've seen a sufficient number of messages then sweep -or- if
a sufficient length of time has elapsed then we sweep.
Note that when audit messages are injected via log file scanning
elapsed wall clock time has no meaning relative to when to perform
the cache sweep. However, the timestamp for an event remains a
critical factor when deciding if an event is complete (have we
scanned far enough ahead that we're confident we won't see any
more messages for this event?). Thus the threshold for when to
expire an event from the cache during static log file scanning is
determined not by wall clock time but rather by the oldest
timestamp in the cache (e.g. there is enough spread between the
timestamps in the cache that it is reasonable to assume the event
is complete). One might ask, in the case of log file scanning, why not
fill the cache until EOF is reached and then sweep the cache?
Because in log files it is not unusual to have thousands or tens
of thousands of events, and the cache would grow needlessly
large. Because we have to deal with the real time case we already
have code to keep only the most recent events in the cache so we
might as well use that logic, keep the code paths the same and
minimize resource usage.
"""

    # 0.005 is a class-level float constant recovered from the dump; its name
    # and purpose (likely a small sleep/timeout in seconds) are guesses.
    timeout_interval = 0.005

    def __init__(self):
        # The initial flush_size value was not recoverable from the dump;
        # 100 is a placeholder guess.
        self.flush_size = 100
        self.flush_count = 0
        self.cache = {}     # event id -> AuditEvent, awaiting completion
        self.events = []
        self.reset_statistics()

    def num_cached_events(self):
        return len(self.cache)

    def reset_statistics(self):
        self.max_cache_length = 0
        self.event_count = 0

    # The dump is truncated mid-method here. The visible fragment creates a
    # new AuditEvent and stores it in the cache keyed by a stringified event
    # id; the method name, signature, and the rest of the body were not
    # recoverable.
    def add_event(self, audit_record):
        event = AuditEvent()
        self.cache[str(audit_record.event_id)] = event