A great example of a “slow” brute force attack #ossec

Over the last couple of days a lot of malicious servers have been caught by my OSSEC HIDS/IPS and sent to my iptables jail. However, I’ve been seeing one host (185.93.185.239) evade my traps for days. It has been knocking on my door at a slow pace, slow enough not to trigger brute-force detection (which requires six events within a short period of time).

So I changed the brute-force detection window to 86400 seconds (24 hours) to see if that helps.
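For reference, the detection window is set per rule. A minimal sketch of such an override in local_rules.xml might look like the following; the rule ID 100100 and the parent SSH rule 5710 are assumptions here, so check them against your own ruleset:

```xml
<group name="local,syslog,">
  <!-- Hypothetical override: widen the brute-force window to 24 hours.
       Rule ID 100100 and parent SID 5710 are assumptions. -->
  <rule id="100100" frequency="6" timeframe="86400" level="10">
    <if_matched_sid>5710</if_matched_sid>
    <description>Slow SSH brute force (6 failures in 24h)</description>
  </rule>
</group>
```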

The result:

image

He got caught and went to jail 🙂

iptables -L

image
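You can also verify the ban from the shell without scanning the whole table; a quick check, assuming iptables and that host’s address:

```shell
# List the current rules numerically and look for the offender
sudo iptables -L -n | grep 185.93.185.239
```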

Today’s mood ..

HOWTO: Install OSSEC / WAZUH hids #security #ossec #wazuh

The last few nights I have been working on this project. At first it seemed easy, but a lot of the information on the internet is either too old, not well maintained, incomplete, or otherwise lacking. Needless to say, these kinds of projects are complex and the IT environment changes fast, so I cannot point fingers at anyone, and I won’t. By the time YOU find this blog because you ran into the same issues, this posting might be outdated as well 🙂

After successfully testing this at home, I implemented it at work in under an hour 🙂

image

For this project I started at http://wazuh-documentation.readthedocs.io/en/latest/ and, as usual, I ended up looking all over the internet. But I took notes; maybe they’ll help you out. Have fun.
(I prepared this posting yesterday and scheduled it for automatic posting at 15:30, dunno if that works)

Installation manual: single-host Wazuh HIDS on Debian 8 (Jessie) (16 GB RAM, 40 GB HDD), including the ELK stack (Kibana interface behind a secured proxy)
Please realize these are my notes; I didn’t clean them up.

base debian jessie server install with standard system utilities
[already installed]

After install
logon as root
apt-get update
sudo apt-get install openssh-server openssh-client mc curl sudo gcc make git libssl-dev apt-transport-https
su
adduser wazuh sudo
visudo

[insert] wazuh ALL=(ALL:ALL) ALL [after line with %sudo]
su wazuh

cd ~
mkdir ossec_tmp
cd ossec_tmp
git clone -b stable https://github.com/wazuh/ossec-wazuh.git
cd ossec-wazuh
sudo ./install.sh
[answer the questions]
sudo /var/ossec/bin/ossec-control start
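To confirm the daemons actually came up, ossec-control also takes a status argument:

```shell
# Show which OSSEC daemons are running
sudo /var/ossec/bin/ossec-control status
```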

[Skip the agent part for now. http://wazuh-documentation.readthedocs.io/en/latest/first_steps.html]

sudo nano /etc/apt/sources.list.d/java-8-debian.list
[insert]
deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
[save]

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886
sudo apt-get update
sudo apt-get install oracle-java8-installer
wget -qO - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://packages.elasticsearch.org/logstash/2.1/debian stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install logstash

[skip the forwarder, I don’t need it since the whole HIDS runs on a single server]

sudo cp ~/ossec_tmp/ossec-wazuh/extensions/logstash/01-ossec-singlehost.conf /etc/logstash/conf.d/
sudo cp ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/elastic-ossec-template.json /etc/logstash/

sudo curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
sudo gzip -d GeoLiteCity.dat.gz && sudo mv GeoLiteCity.dat /etc/logstash/
sudo usermod -a -G ossec logstash

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
sudo apt-get update && sudo apt-get install elasticsearch
sudo update-rc.d elasticsearch defaults 95 10

sudo nano /etc/elasticsearch/elasticsearch.yml
[find the following variables, uncomment them, and set the values as you wish]
cluster.name: ossec
node.name: ossec_node1
uncomment bootstrap.mlockall: true
[save]

sudo nano /etc/security/limits.conf
[insert at the end]
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
[save]

sudo nano /etc/default/elasticsearch
[edit and uncomment]
# ES_HEAP_SIZE - set it to half your system RAM
ES_HEAP_SIZE=8g
MAX_LOCKED_MEMORY=unlimited
MAX_OPEN_FILES=65535
[save]

sudo nano /usr/lib/systemd/system/elasticsearch.service
[uncomment]
LimitMEMLOCK=infinity
[save]

sudo service elasticsearch start
sudo systemctl enable elasticsearch

cd ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/ && curl -XPUT "http://localhost:9200/_template/ossec/" -d "@elastic-ossec-template.json"
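If you want to verify that Elasticsearch accepted the template, a quick check (the template name ossec matches the PUT above):

```shell
# Should return the ossec template as JSON; an empty {} means the PUT failed
curl -XGET "http://localhost:9200/_template/ossec?pretty"
```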
sudo update-rc.d logstash defaults 95 10
sudo service logstash start

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb http://packages.elastic.co/kibana/4.5/debian stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install kibana

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service

sudo service kibana start

In your browser

http://IP_OF_HOST_OR_HOSTNAME:5601/app/kibana

[wait for the yellow status light; it should turn green. If not, refresh after 2 minutes]
goto http://IP_OF_HOST_OR_HOSTNAME:5601
[configure]
– Check “Index contains time-based events”.
– Insert Index name or pattern: ossec-*
– On “Time-field name” list select @timestamp option.
– Click on “Create” button.
– You should see the fields list with about ~72 fields.
– Go to the “Discover” tab in the top bar.

– Click at top bar on “Settings”.
– Click on “Objects”.
– Then click the button “Import”
– Select the file ~/ossec_tmp/ossec-wazuh/extensions/kibana/kibana-ossecwazuh-dashboards.json
– Optional: you can download the dashboards JSON file directly from the repository.
[Refresh the Kibana page and you should be able to load your imported Dashboards.]

su
mkdir -p /var/ossec/update/ruleset && cd /var/ossec/update/ruleset
wget https://raw.githubusercontent.com/wazuh/ossec-rules/stable/ossec_ruleset.py
chmod +x /var/ossec/update/ruleset/ossec_ruleset.py
/var/ossec/update/ruleset/ossec_ruleset.py --help

Update ruleset
/var/ossec/update/ruleset/ossec_ruleset.py -a
This can be cronned in the future.

crontab -e (as root)
[insert]
@weekly cd /var/ossec/update/ruleset && ./ossec_ruleset.py -s
[save]

NGINX Proxy

sudo apt-get update
sudo apt-get install nginx apache2-utils

sudo rm /etc/nginx/sites-available/default
sudo nano /etc/nginx/sites-available/default
[insert]
server {
    listen 80 default_server; # listen on IPv4
    listen [::]:80;           # listen on IPv6
    return 301 https://$host$request_uri;
}

server {
    listen *:443;
    listen [::]:443;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/kibana-access.crt;
    ssl_certificate_key /etc/pki/tls/private/kibana-access.key;
    server_name "Server Name";
    access_log /var/log/nginx/kibana.access.log;
    error_log /var/log/nginx/kibana.error.log;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
        proxy_pass http://127.0.0.1:5601;
    }
}
[save]

cd ~
[document your passwords for the next part securely!!!]
sudo openssl genrsa -des3 -out server.key 1024
sudo openssl req -new -key server.key -out server.csr

sudo cp server.key server.key.org
sudo openssl rsa -in server.key.org -out kibana-access.key
sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out kibana-access.crt
sudo mkdir -p /etc/pki/tls/certs
sudo cp kibana-access.crt /etc/pki/tls/certs/
sudo mkdir -p /etc/pki/tls/private/
sudo cp kibana-access.key /etc/pki/tls/private/

sudo htpasswd -c /etc/nginx/conf.d/kibana.htpasswd kibanaadmin
[note: make up your own username instead of kibanaadmin if you want to]

sudo nano /opt/kibana/config/kibana.yml
[edit and uncomment]
server.host: "127.0.0.1"
[save]

sudo service kibana start
sudo service nginx restart

browse: https://your_host_or_ip:443

After a reboot logstash has to be started manually. I haven’t spent much time on the issue since I hardly ever need to reboot. I will update this post when I solve it; meanwhile, any tips are very welcome in the comments …
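A possible workaround I have not verified yet: on a systemd-based Jessie install, enabling the unit explicitly might help, and on sysvinit, re-running the update-rc.d step:

```shell
# Untested workaround: make sure logstash is enabled at boot
sudo systemctl enable logstash 2>/dev/null || sudo update-rc.d logstash defaults 95 10
```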
########################################
Drop me a note if you find an error, thank me not πŸ™‚

@sucurisecurity killed my wordpress site (twice)

w00t w00t! mod_fcgid: stderr: PHP Fatal error: Cannot redeclare class SucuriScanSiteCheck

Yesterday Sucuri for WordPress was updated to 1.7.18 and my website immediately went offline. I got notified about it by Jetpack.
Problem: I cannot deactivate the Sucuri plugin via the WordPress dashboard, since the site is down and I can’t log in 🙂
Solution: log on via FTP and rename the Sucuri plugin directory to .old. My site went back up immediately.

I tried to revert to my latest backup, which worked, but Sucuri got automatically updated to version 1.7.19. My website went down again … and I could start all over again, disabling the plugin over FTP.
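If you have shell access instead of FTP, the same trick is a one-liner; the wp-content path and plugin directory name are assumptions here, so check your own install:

```shell
# Renaming the plugin directory deactivates it without touching the database
cd /var/www/html/wp-content/plugins
mv sucuri-scanner sucuri-scanner.old
```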

For now I leave the plugin as is. Hopefully Sucuri comes with a solution soon. I’ll wait ..

Update: after posting this to Twitter Sucuri immediately contacted me and offered to provide a solution. As we speak we are working together to get this solved. Thumbs up for a company like that.

Update2: it’s working again, but the source of the troubles is yet to be determined.

EDIT: Prune backups made with Relax and Recover #rear #linux

People keep asking me how I make incremental backups with rear, so I edited my posting about pruning backups with rear and inserted my config.

Prune backups …

If you don’t want to look that up, you’ll find my /etc/rear/local.conf below.

BACKUP=NETFS
OUTPUT=ISO
CDROM_SIZE=20
BACKUP_URL=nfs://xxx.xxx.xxx.xxx/volume2/LinuxDR/rear
ISO_DIR=/mnt/ISO
ISO_PREFIX="rear-$HOSTNAME"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' '/mnt/*' '/media/*' $VAR_DIR/output/\* )
BACKUP_SELINUX_DISABLE=1
BACKUP_TYPE=incremental
FULLBACKUPDAY=Fri

I also rsync my backups to an external (Raspberry Pi 1B) server, will post about that later on.
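The offsite copy is nothing fancy; a sketch of the kind of rsync call involved (host, user, and paths are made up here):

```shell
# Mirror the ISO directory to the Pi, deleting files that were pruned locally
rsync -av --delete /mnt/ISO/ pi@backuppi:/mnt/backup/rear/
```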

Have fun backing up!

Why would one use #fail2ban ..

This is why 🙂

Chain INPUT (policy ACCEPT)
target prot opt source destination
fail2ban-postfix-sasl tcp — anywhere anywhere multiport dports smtp
fail2ban-dovecot-pop3imap tcp — anywhere anywhere multiport dports pop3,pop3s,imap2,imaps
fail2ban-pureftpd tcp — anywhere anywhere multiport dports ftp
fail2ban-ssh tcp — anywhere anywhere multiport dports ssh

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain fail2ban-dovecot-pop3imap (1 references)
target prot opt source destination
RETURN all — anywhere anywhere

Chain fail2ban-postfix-sasl (1 references)
target prot opt source destination
REJECT all — 157.122.148.247 anywhere reject-with icmp-port-unreachable
REJECT all — 193.189.117.162 anywhere reject-with icmp-port-unreachable
REJECT all — 193.189.117.151 anywhere reject-with icmp-port-unreachable
REJECT all — aus1345893.lnk.telstra.net anywhere reject-with icmp-port-unreachable
REJECT all — 187-54-83-70.gnace704.e.brasiltelecom.net.br anywhere reject-with icmp-port-unreachable
REJECT all — 41.65.158.124 anywhere reject-with icmp-port-unreachable
REJECT all — 185.125.4.196 anywhere reject-with icmp-port-unreachable
RETURN all — anywhere anywhere

Chain fail2ban-pureftpd (1 references)
target prot opt source destination
REJECT all — localhost anywhere reject-with icmp-port-unreachable
REJECT all — 46.105.49.146 anywhere reject-with icmp-port-unreachable
REJECT all — 221.5.49.36 anywhere reject-with icmp-port-unreachable
RETURN all — anywhere anywhere

Chain fail2ban-ssh (1 references)
target prot opt source destination
REJECT all — 119.164.254.50 anywhere reject-with icmp-port-unreachable
REJECT all — 113.12.184.238 anywhere reject-with icmp-port-unreachable
REJECT all — customer-187-157-170-18-sta.uninet-ide.com.mx anywhere reject-with icmp-port-unreachable
REJECT all — 171.234.8.73 anywhere reject-with icmp-port-unreachable
REJECT all — 121.15.13.237 anywhere reject-with icmp-port-unreachable
REJECT all — 117.3.103.61 anywhere reject-with icmp-port-unreachable
REJECT all — 103.4.231.200 anywhere reject-with icmp-port-unreachable
REJECT all — 91.224.160.10 anywhere reject-with icmp-port-unreachable
REJECT all — host-156.196.1.35-static.tedata.net anywhere reject-with icmp-port-unreachable
REJECT all — 222.76.215.239 anywhere reject-with icmp-port-unreachable
REJECT all — 27.72.65.228 anywhere reject-with icmp-port-unreachable
REJECT all — 116.31.116.10 anywhere reject-with icmp-port-unreachable
REJECT all — 123.238.175.59.broad.wh.hb.dynamic.163data.com.cn anywhere reject-with icmp-port-unreachable
REJECT all — 104.148.116.66 anywhere reject-with icmp-port-unreachable
REJECT all — 124.158.5.115 anywhere reject-with icmp-port-unreachable
REJECT all — ip-198.12-150-220.ip.secureserver.net anywhere reject-with icmp-port-unreachable
REJECT all — ubuntu-dev.sofia-connect.net anywhere reject-with icmp-port-unreachable
REJECT all — 13.84.218.172 anywhere reject-with icmp-port-unreachable
REJECT all — 45.119.154.176 anywhere reject-with icmp-port-unreachable
REJECT all — static.vnpt.vn anywhere reject-with icmp-port-unreachable
REJECT all — ec2-52-37-48-252.us-west-2.compute.amazonaws.com anywhere reject-with icmp-port-unreachable
REJECT all — 103.207.39.18 anywhere reject-with icmp-port-unreachable
REJECT all — 185.110.132.201 anywhere reject-with icmp-port-unreachable
REJECT all — oisin.gainpromotion.net anywhere reject-with icmp-port-unreachable
REJECT all — 222.186.56.14 anywhere reject-with icmp-port-unreachable
REJECT all — vps07.snthostings.com anywhere reject-with icmp-port-unreachable
REJECT all — 79.96.151.203.sta.inet.co.th anywhere reject-with icmp-port-unreachable
REJECT all — 222.186.21.224 anywhere reject-with icmp-port-unreachable
REJECT all — static-49-107-226-77.ipcom.comunitel.net anywhere reject-with icmp-port-unreachable
REJECT all — dynamic.vdc.vn anywhere reject-with icmp-port-unreachable
REJECT all — ruslango94.zomro.com anywhere reject-with icmp-port-unreachable
REJECT all — host149-173-static.9-188-b.business.telecomitalia.it anywhere reject-with icmp-port-unreachable
REJECT all — wim-luche9.fastnet.it anywhere reject-with icmp-port-unreachable
REJECT all — ec2-52-26-61-71.us-west-2.compute.amazonaws.com anywhere reject-with icmp-port-unreachable
REJECT all — static.vdc.vn anywhere reject-with icmp-port-unreachable
RETURN all — anywhere anywhere
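If a legitimate host ever ends up in one of these chains, you don’t have to touch iptables by hand; fail2ban-client can lift the ban (the jail name ssh matches the chains above, and the address is just an example):

```shell
# Check the jail, then unban a specific address
sudo fail2ban-client status ssh
sudo fail2ban-client set ssh unbanip 192.0.2.10
```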

Let me tell you bout the birds and the bees …

and the flowers and the treeeeees!
Skip the birds, and the flowers πŸ™‚

Leaves us with bees and trees.

image

image

image

image

image

image

#Linux Use grep to find what you need

“grep” is a wonderful tool to find things in long lists of data. I’ll share an example:

I want to use a specific option in rsync, but the manpage is rather long:

rsync --help

rsync version 3.0.9 protocol version 30
Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
Capabilities:
64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
append, ACLs, xattrs, iconv, symtimes

rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. See the GNU
General Public Licence for details.

rsync is a file transfer program capable of efficient remote update
via a fast differencing algorithm.

Usage: rsync [OPTION]... SRC [SRC]... DEST
 or rsync [OPTION]... SRC [SRC]... [USER@]HOST:DEST
 or rsync [OPTION]... SRC [SRC]... [USER@]HOST::DEST
 or rsync [OPTION]... SRC [SRC]... rsync://[USER@]HOST[:PORT]/DEST
 or rsync [OPTION]... [USER@]HOST:SRC [DEST]
 or rsync [OPTION]... [USER@]HOST::SRC [DEST]
 or rsync [OPTION]... rsync://[USER@]HOST[:PORT]/SRC [DEST]
The ':' usages connect via remote shell, while '::' & 'rsync://' usages connect
to an rsync daemon, and require SRC or DEST to start with a module name.

Options
-v, --verbose increase verbosity
-q, --quiet suppress non-error messages
--no-motd suppress daemon-mode MOTD (see manpage caveat)
-c, --checksum skip based on checksum, not mod-time & size
-a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)
--no-OPTION turn off an implied OPTION (e.g. --no-D)
-r, --recursive recurse into directories
-R, --relative use relative path names
--no-implied-dirs don't send implied dirs with --relative
-b, --backup make backups (see --suffix & --backup-dir)
--backup-dir=DIR make backups into hierarchy based in DIR
--suffix=SUFFIX set backup suffix (default ~ w/o --backup-dir)
-u, --update skip files that are newer on the receiver
--inplace update destination files in-place (SEE MAN PAGE)
--append append data onto shorter files
--append-verify like --append, but with old data in file checksum
-d, --dirs transfer directories without recursing
-l, --links copy symlinks as symlinks
-L, --copy-links transform symlink into referent file/dir
--copy-unsafe-links only "unsafe" symlinks are transformed
--safe-links ignore symlinks that point outside the source tree
-k, --copy-dirlinks transform symlink to a dir into referent dir
-K, --keep-dirlinks treat symlinked dir on receiver as dir
-H, --hard-links preserve hard links
-p, --perms preserve permissions
-E, --executability preserve the file's executability
--chmod=CHMOD affect file and/or directory permissions
-A, --acls preserve ACLs (implies --perms)
-X, --xattrs preserve extended attributes
-o, --owner preserve owner (super-user only)
-g, --group preserve group
--devices preserve device files (super-user only)
--specials preserve special files
-D same as --devices --specials
-t, --times preserve modification times
-O, --omit-dir-times omit directories from --times
--super receiver attempts super-user activities
--fake-super store/recover privileged attrs using xattrs
-S, --sparse handle sparse files efficiently
-n, --dry-run perform a trial run with no changes made
-W, --whole-file copy files whole (without delta-xfer algorithm)
-x, --one-file-system don't cross filesystem boundaries
-B, --block-size=SIZE force a fixed checksum block-size
-e, --rsh=COMMAND specify the remote shell to use
--rsync-path=PROGRAM specify the rsync to run on the remote machine
--existing skip creating new files on receiver
--ignore-existing skip updating files that already exist on receiver
--remove-source-files sender removes synchronized files (non-dirs)
--del an alias for --delete-during
--delete delete extraneous files from destination dirs
--delete-before receiver deletes before transfer, not during
--delete-during receiver deletes during the transfer
--delete-delay find deletions during, delete after
--delete-after receiver deletes after transfer, not during
--delete-excluded also delete excluded files from destination dirs
--ignore-errors delete even if there are I/O errors
--force force deletion of directories even if not empty
--max-delete=NUM don't delete more than NUM files
--max-size=SIZE don't transfer any file larger than SIZE
--min-size=SIZE don't transfer any file smaller than SIZE
--partial keep partially transferred files
--partial-dir=DIR put a partially transferred file into DIR
--delay-updates put all updated files into place at transfer's end
-m, --prune-empty-dirs prune empty directory chains from the file-list
--numeric-ids don't map uid/gid values by user/group name
--timeout=SECONDS set I/O timeout in seconds
--contimeout=SECONDS set daemon connection timeout in seconds
-I, --ignore-times don't skip files that match in size and mod-time
--size-only skip files that match in size
--modify-window=NUM compare mod-times with reduced accuracy
-T, --temp-dir=DIR create temporary files in directory DIR
-y, --fuzzy find similar file for basis if no dest file
--compare-dest=DIR also compare destination files relative to DIR
--copy-dest=DIR ... and include copies of unchanged files
--link-dest=DIR hardlink to files in DIR when unchanged
-z, --compress compress file data during the transfer
--compress-level=NUM explicitly set compression level
--skip-compress=LIST skip compressing files with a suffix in LIST
-C, --cvs-exclude auto-ignore files the same way CVS does
-f, --filter=RULE add a file-filtering RULE
-F same as --filter='dir-merge /.rsync-filter'
repeated: --filter='- .rsync-filter'
--exclude=PATTERN exclude files matching PATTERN
--exclude-from=FILE read exclude patterns from FILE
--include=PATTERN don't exclude files matching PATTERN
--include-from=FILE read include patterns from FILE
--files-from=FILE read list of source-file names from FILE
-0, --from0 all *-from/filter files are delimited by 0s
-s, --protect-args no space-splitting; only wildcard special-chars
--address=ADDRESS bind address for outgoing socket to daemon
--port=PORT specify double-colon alternate port number
--sockopts=OPTIONS specify custom TCP options
--blocking-io use blocking I/O for the remote shell
--stats give some file-transfer stats
-8, --8-bit-output leave high-bit chars unescaped in output
-h, --human-readable output numbers in a human-readable format
--progress show progress during transfer
-P same as --partial --progress
-i, --itemize-changes output a change-summary for all updates
--out-format=FORMAT output updates using the specified FORMAT
--log-file=FILE log what we're doing to the specified FILE
--log-file-format=FMT log updates using the specified FMT
--password-file=FILE read daemon-access password from FILE
--list-only list the files instead of copying them
--bwlimit=KBPS limit I/O bandwidth; KBytes per second
--write-batch=FILE write a batched update to FILE
--only-write-batch=FILE like --write-batch but w/o updating destination
--read-batch=FILE read a batched update from FILE
--protocol=NUM force an older protocol version to be used
--iconv=CONVERT_SPEC request charset conversion of filenames
-4, --ipv4 prefer IPv4
-6, --ipv6 prefer IPv6
--version print version number
(-h) --help show this help (-h is --help only if used alone)

Use "rsync --daemon --help" to see the daemon-mode command-line options.
Please see the rsync(1) and rsyncd.conf(5) man pages for full documentation.
See http://rsync.samba.org/ for updates, bug reports, and answers

********************************************
I only want to see the options that let me delete something before/after/during sync on the target, but I am a lazy sysadmin and the list above is way too long. So:

rsync --help | grep delete

--del an alias for --delete-during
--delete delete extraneous files from destination dirs
--delete-before receiver deletes before transfer, not during
--delete-during receiver deletes during the transfer
--delete-delay find deletions during, delete after
--delete-after receiver deletes after transfer, not during
--delete-excluded also delete excluded files from destination dirs
--ignore-errors delete even if there are I/O errors
--max-delete=NUM don't delete more than NUM files

And I am happy 🙂
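grep composes with anything, not just rsync. A tiny self-contained example of the same filtering idea, adding a case-insensitive match and a count:

```shell
# Filter a list of options for "delete", ignoring case, and count the hits
printf -- '--delete\n--DELETE-after\n--update\n' | grep -ic delete
# prints 2
```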

Two small updates :-)

About my Trachycarpus Fortunei palm tree:

image
Blossoms!

About my personal life:

image
Blossoms too <3 🙂

So at the moment I am busy with other things, but some technical stuff is upcoming too (hint: Seafile cloud server on an RPi).

2705746 / 2416 :-)