This year, 25 hours …

Happy birthday to me 🙂

A great example of a “slow” brute force attack #ossec

Over the last couple of days a lot of malicious servers got caught by my OSSEC HIDS/IPS and were sent to my iptables jail. However, I've been watching one host (185.93.185.239) evade my traps for days. It has been knocking on my door at a slow pace, slow enough not to trigger the brute force detection (which fires on six events within a short time window).

So I widened the brute force detection window to 86400 seconds (24 hours), to see if that helps.
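Roughly what that looks like as a local composite rule — a sketch, not my literal config. Rule 5710 is the stock sshd failed-login rule in the default ruleset, but the ids, level, and frequency semantics differ per OSSEC version, so check your own ruleset before copying this:

sudo nano /var/ossec/rules/local_rules.xml
[insert]
<group name="local,syslog,sshd,">
  <!-- six failed sshd logins from one source within 24 hours -->
  <rule id="100100" level="10" frequency="6" timeframe="86400">
    <if_matched_sid>5710</if_matched_sid>
    <same_source_ip />
    <description>sshd: slow brute force attempt</description>
  </rule>
</group>
[save, then: sudo /var/ossec/bin/ossec-control restart]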

The result:

[screenshot]

He got caught and went to jail 🙂

iptables -L

[screenshot of the iptables output]

Today’s mood ..

HOWTO: Install OSSEC / WAZUH hids #security #ossec #wazuh

The last few nights I have been working on this project. At first it seemed easy, but a lot of information on the internet is either too old, not well maintained, incomplete, or otherwise lacking. Needless to say, these kinds of projects are complex and the IT environment changes fast, so I cannot point fingers at anyone, and I won't. By the time YOU find this blog because you've run into the same issues, this posting might be outdated as well 🙂

After successfully testing this at home I implemented it at work in under an hour 🙂


For this project I started at http://wazuh-documentation.readthedocs.io/en/latest/ and, as usual, I ended up looking all over the internet. But I took notes; maybe they'll help you out. Have fun.
(I prepared this posting yesterday and scheduled it for automatic posting at 15:30, dunno if that works)

Installation manual: single-host Wazuh HIDS on Debian Jessie 8 (16 GB RAM, 40 GB HDD), including the ELK stack (Kibana interface behind a secured proxy)
Please realize these are my notes; I didn't clean them up.

base debian jessie server install with standard system utilities
[already installed]

After install
logon as root
apt-get update
sudo apt-get install openssh-server openssh-client mc curl sudo gcc make git libssl-dev apt-transport-https
su
adduser wazuh sudo
visudo

[insert] wazuh ALL=(ALL:ALL) ALL [after line with %sudo]
su wazuh

cd ~
mkdir ossec_tmp
cd ossec_tmp
git clone -b stable https://github.com/wazuh/ossec-wazuh.git
cd ossec-wazuh
sudo ./install.sh

sudo /var/ossec/bin/ossec-control start
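A quick sanity check that everything came up:

sudo /var/ossec/bin/ossec-control status
[each ossec daemon should report "is running"]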

[Skip the agent part for now. http://wazuh-documentation.readthedocs.io/en/latest/first_steps.html]

sudo nano /etc/apt/sources.list.d/java-8-debian.list
[insert]
deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
[save]

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886
sudo apt-get update
sudo apt-get install oracle-java8-installer
wget -qO - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://packages.elasticsearch.org/logstash/2.1/debian stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install logstash

[skip the forwarder, I don’t need it since the whole HIDS runs on a single server]

sudo cp ~/ossec_tmp/ossec-wazuh/extensions/logstash/01-ossec-singlehost.conf /etc/logstash/conf.d/
sudo cp ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/elastic-ossec-template.json /etc/logstash/

sudo curl -O “http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz”
sudo gzip -d GeoLiteCity.dat.gz && sudo mv GeoLiteCity.dat /etc/logstash/
sudo usermod -a -G ossec logstash

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
sudo apt-get update && sudo apt-get install elasticsearch
sudo update-rc.d elasticsearch defaults 95 10

sudo nano /etc/elasticsearch/elasticsearch.yml
[find the following variables, uncomment them, and set the names as you wish]
cluster.name: ossec
node.name: ossec_node1
bootstrap.mlockall: true
[save]

sudo nano /etc/security/limits.conf
[insert at the end]
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
[save]

sudo nano /etc/default/elasticsearch
[edit and uncomment]
# ES_HEAP_SIZE - set this to half your system RAM
ES_HEAP_SIZE=8g
MAX_LOCKED_MEMORY=unlimited
MAX_OPEN_FILES=65535
[save]

sudo nano /usr/lib/systemd/system/elasticsearch.service
[uncomment]
LimitMEMLOCK=infinity
[save]

sudo service elasticsearch start
sudo systemctl enable elasticsearch
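A quick check before moving on; this should return a small JSON blob mentioning the cluster name you set above:

curl http://localhost:9200/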

cd ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/ && curl -XPUT "http://localhost:9200/_template/ossec/" -d "@elastic-ossec-template.json"
sudo update-rc.d logstash defaults 95 10
sudo service logstash start
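Once OSSEC generates alerts and logstash has shipped them, an ossec-* index should appear (give it a few minutes):

curl "http://localhost:9200/_cat/indices?v"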

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb http://packages.elastic.co/kibana/4.5/debian stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install kibana

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service

sudo service kibana start

In your browser

http://IP_OF_HOST_OR_HOSTNAME:5601/app/kibana

[wait for the status light to turn from yellow to green; if it doesn't, refresh after 2 minutes]
go to http://IP_OF_HOST_OR_HOSTNAME:5601
[configure]
– Check “Index contains time-based events”.
– Insert Index name or pattern: ossec-*
– On “Time-field name” list select @timestamp option.
– Click on “Create” button.
– You should see the fields list with roughly 72 fields.
– Go to the "Discover" tab in the top bar.

– Click at top bar on “Settings”.
– Click on “Objects”.
– Then click the button “Import”
– Select the file ~/ossec_tmp/ossec-wazuh/extensions/kibana/kibana-ossecwazuh-dashboards.json
– Optional: you can also download the Dashboards JSON file directly from the Wazuh repository.
[Refresh the Kibana page and you should be able to load your imported Dashboards.]

su
mkdir -p /var/ossec/update/ruleset && cd /var/ossec/update/ruleset
wget https://raw.githubusercontent.com/wazuh/ossec-rules/stable/ossec_ruleset.py
chmod +x /var/ossec/update/ruleset/ossec_ruleset.py
/var/ossec/update/ruleset/ossec_ruleset.py --help

Update ruleset
cd /var/ossec/update/ruleset && ./ossec_ruleset.py -a
This can be cronned in the future.

crontab -e (as root)
[insert]
@weekly cd /var/ossec/update/ruleset && ./ossec_ruleset.py -s
[save]

NGINX Proxy

sudo apt-get update
sudo apt-get install nginx apache2-utils

sudo rm /etc/nginx/sites-available/default
sudo nano /etc/nginx/sites-available/default
[insert]
server {
    listen 80 default_server; # Listen on IPv4
    listen [::]:80;           # Listen on IPv6
    return 301 https://$host$request_uri;
}

server {
    listen *:443;
    listen [::]:443;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/kibana-access.crt;
    ssl_certificate_key /etc/pki/tls/private/kibana-access.key;
    server_name "Server Name";
    access_log /var/log/nginx/kibana.access.log;
    error_log /var/log/nginx/kibana.error.log;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
        proxy_pass http://127.0.0.1:5601;
    }
}
[save]

cd ~
[document your passwords for the next part securely!!!]
sudo openssl genrsa -des3 -out server.key 1024
sudo openssl req -new -key server.key -out server.csr

sudo cp server.key server.key.org
sudo openssl rsa -in server.key.org -out kibana-access.key
sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out kibana-access.crt
sudo mkdir -p /etc/pki/tls/certs
sudo cp kibana-access.crt /etc/pki/tls/certs/
sudo mkdir -p /etc/pki/tls/private/
sudo cp kibana-access.key /etc/pki/tls/private/
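Optional: verify that the certificate and key actually belong together; the two hashes must be identical:

openssl x509 -noout -modulus -in /etc/pki/tls/certs/kibana-access.crt | openssl md5
sudo openssl rsa -noout -modulus -in /etc/pki/tls/private/kibana-access.key | openssl md5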

sudo htpasswd -c /etc/nginx/conf.d/kibana.htpasswd kibanaadmin
[note: replace kibanaadmin with a username of your own if you want to]

sudo nano /opt/kibana/config/kibana.yml
[edit and uncomment]
server.host: "127.0.0.1"
[save]
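Before restarting nginx, a quick syntax check saves some head-scratching:

sudo nginx -t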

sudo service kibana start
sudo service nginx restart

browse: https://your_host_or_ip:443

After a reboot logstash has to be started manually; I haven't spent much time on the issue since I hardly ever need to reboot. I will update this post when I solve it, meanwhile any tips are very welcome in the comments …
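If you want to experiment: my hunch is the systemd unit simply isn't enabled, since Jessie mixes sysvinit scripts and systemd units. Untested here, so consider this an assumption:

sudo systemctl daemon-reload
sudo systemctl enable logstash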
########################################
Drop me a note if you find an error, thank me not 🙂

@sucurisecurity killed my wordpress site (twice)

w00t w00t! mod_fcgid: stderr: PHP Fatal error: Cannot redeclare class SucuriScanSiteCheck

Yesterday Sucuri for WordPress was updated to 1.7.18 and my website immediately went offline. I was notified about it by Jetpack.
Problem: I cannot deactivate the Sucuri plugin via the WordPress dashboard, since the site is down and I can't log in 🙂
Solution: log on via FTP and rename the Sucuri plugin directory (append .old). My site went back up immediately.

I tried to revert to my latest backup, which worked, but Sucuri got automatically updated to version 1.7.19 and my website went down again .. so I could start all over again, disabling the plugin over FTP.

For now I'll leave the plugin as is. Hopefully Sucuri comes up with a solution soon. I'll wait ..

Update: after I posted this to Twitter, Sucuri immediately contacted me and offered to provide a solution. As we speak we are working together to get this solved. Thumbs up for a company like that.

Update2: it’s working again, but the source of the troubles is yet to be determined.

EDIT: Prune backups made with Relax and Recover #rear #linux

People keep asking me how I make incremental backups with rear, so I edited my posting about pruning backups with rear and inserted my config.

Prune backups …

If you don't want to look that up, you'll find my /etc/rear/local.conf below.

BACKUP=NETFS
OUTPUT=ISO
CDROM_SIZE=20
BACKUP_URL=nfs://xxx.xxx.xxx.xxx/volume2/LinuxDR/rear
ISO_DIR=/mnt/ISO
ISO_PREFIX="rear-$HOSTNAME"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' '/mnt/*' '/media/*' $VAR_DIR/output/\* )
BACKUP_SELINUX_DISABLE=1
BACKUP_TYPE=incremental
FULLBACKUPDAY=Fri
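With this config ReaR decides by itself whether today is full-backup day, so a single daily run is enough. A sketch of how that could be cronned — path and time are made up, adjust to your own setup:

[/etc/cron.d/rear - hypothetical]
30 1 * * * root /usr/sbin/rear mkbackup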

I also rsync my backups to an external server (a Raspberry Pi 1B); I will post about that later on.

Have fun backing up!

Why would one use #fail2ban ..

This is why 🙂

Chain INPUT (policy ACCEPT)
target prot opt source destination
fail2ban-postfix-sasl tcp -- anywhere anywhere multiport dports smtp
fail2ban-dovecot-pop3imap tcp -- anywhere anywhere multiport dports pop3,pop3s,imap2,imaps
fail2ban-pureftpd tcp -- anywhere anywhere multiport dports ftp
fail2ban-ssh tcp -- anywhere anywhere multiport dports ssh

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain fail2ban-dovecot-pop3imap (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere

Chain fail2ban-postfix-sasl (1 references)
target prot opt source destination
REJECT all -- 157.122.148.247 anywhere reject-with icmp-port-unreachable
REJECT all -- 193.189.117.162 anywhere reject-with icmp-port-unreachable
REJECT all -- 193.189.117.151 anywhere reject-with icmp-port-unreachable
REJECT all -- aus1345893.lnk.telstra.net anywhere reject-with icmp-port-unreachable
REJECT all -- 187-54-83-70.gnace704.e.brasiltelecom.net.br anywhere reject-with icmp-port-unreachable
REJECT all -- 41.65.158.124 anywhere reject-with icmp-port-unreachable
REJECT all -- 185.125.4.196 anywhere reject-with icmp-port-unreachable
RETURN all -- anywhere anywhere

Chain fail2ban-pureftpd (1 references)
target prot opt source destination
REJECT all -- localhost anywhere reject-with icmp-port-unreachable
REJECT all -- 46.105.49.146 anywhere reject-with icmp-port-unreachable
REJECT all -- 221.5.49.36 anywhere reject-with icmp-port-unreachable
RETURN all -- anywhere anywhere

Chain fail2ban-ssh (1 references)
target prot opt source destination
REJECT all -- 119.164.254.50 anywhere reject-with icmp-port-unreachable
REJECT all -- 113.12.184.238 anywhere reject-with icmp-port-unreachable
REJECT all -- customer-187-157-170-18-sta.uninet-ide.com.mx anywhere reject-with icmp-port-unreachable
REJECT all -- 171.234.8.73 anywhere reject-with icmp-port-unreachable
REJECT all -- 121.15.13.237 anywhere reject-with icmp-port-unreachable
REJECT all -- 117.3.103.61 anywhere reject-with icmp-port-unreachable
REJECT all -- 103.4.231.200 anywhere reject-with icmp-port-unreachable
REJECT all -- 91.224.160.10 anywhere reject-with icmp-port-unreachable
REJECT all -- host-156.196.1.35-static.tedata.net anywhere reject-with icmp-port-unreachable
REJECT all -- 222.76.215.239 anywhere reject-with icmp-port-unreachable
REJECT all -- 27.72.65.228 anywhere reject-with icmp-port-unreachable
REJECT all -- 116.31.116.10 anywhere reject-with icmp-port-unreachable
REJECT all -- 123.238.175.59.broad.wh.hb.dynamic.163data.com.cn anywhere reject-with icmp-port-unreachable
REJECT all -- 104.148.116.66 anywhere reject-with icmp-port-unreachable
REJECT all -- 124.158.5.115 anywhere reject-with icmp-port-unreachable
REJECT all -- ip-198.12-150-220.ip.secureserver.net anywhere reject-with icmp-port-unreachable
REJECT all -- ubuntu-dev.sofia-connect.net anywhere reject-with icmp-port-unreachable
REJECT all -- 13.84.218.172 anywhere reject-with icmp-port-unreachable
REJECT all -- 45.119.154.176 anywhere reject-with icmp-port-unreachable
REJECT all -- static.vnpt.vn anywhere reject-with icmp-port-unreachable
REJECT all -- ec2-52-37-48-252.us-west-2.compute.amazonaws.com anywhere reject-with icmp-port-unreachable
REJECT all -- 103.207.39.18 anywhere reject-with icmp-port-unreachable
REJECT all -- 185.110.132.201 anywhere reject-with icmp-port-unreachable
REJECT all -- oisin.gainpromotion.net anywhere reject-with icmp-port-unreachable
REJECT all -- 222.186.56.14 anywhere reject-with icmp-port-unreachable
REJECT all -- vps07.snthostings.com anywhere reject-with icmp-port-unreachable
REJECT all -- 79.96.151.203.sta.inet.co.th anywhere reject-with icmp-port-unreachable
REJECT all -- 222.186.21.224 anywhere reject-with icmp-port-unreachable
REJECT all -- static-49-107-226-77.ipcom.comunitel.net anywhere reject-with icmp-port-unreachable
REJECT all -- dynamic.vdc.vn anywhere reject-with icmp-port-unreachable
REJECT all -- ruslango94.zomro.com anywhere reject-with icmp-port-unreachable
REJECT all -- host149-173-static.9-188-b.business.telecomitalia.it anywhere reject-with icmp-port-unreachable
REJECT all -- wim-luche9.fastnet.it anywhere reject-with icmp-port-unreachable
REJECT all -- ec2-52-26-61-71.us-west-2.compute.amazonaws.com anywhere reject-with icmp-port-unreachable
REJECT all -- static.vdc.vn anywhere reject-with icmp-port-unreachable
RETURN all -- anywhere anywhere

Monitoring caffeine levels with @zabbix

Can somebody bring me some coffee? Please?

[screenshot: Zabbix graph]

🙂

Driving to work, at 5472 km/h …

Have fun watching ….

HOWTO: Prune backups made with Relax and Recover #rear #linux

I back up my Linux machines bare-metal with ReaR, in the form of weekly full backups and daily incrementals. ReaR does a great job but doesn't clean up old backups by itself, so if you do nothing you're filling up your backup server sooner or later.

As I am a lazy Linux sysadmin I don't want to clean up leftovers manually, so I automated it 🙂 Here is how:

First: my /etc/rear/local.conf

BACKUP=NETFS
OUTPUT=ISO
CDROM_SIZE=20
BACKUP_URL=nfs://xxx.xxx.xxx.xxx/volume2/LinuxDR/rear
ISO_DIR=/mnt/ISO
ISO_PREFIX="rear-$HOSTNAME"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' '/mnt/*' '/media/*' $VAR_DIR/output/\* )
BACKUP_SELINUX_DISABLE=1
BACKUP_TYPE=incremental
FULLBACKUPDAY=Fri
NETFS_PREFIX="$HOSTNAME"

This will create a full backup every Friday, and incrementals on all other days.

In the picture below you can see what happens if you let things run their course:

[screenshot: files_lamp02 - the backup directory listing]

I have got 11 backup archives, of which I only need 4 (as of today) to recover when needed. The archives with an "F" in their name are full backups, the other ones (with "I") are incrementals. There are also two important files here, basebackup.txt and timestamp.txt. Those two do much the same thing: they tell the system when the last full backup was made, which ReaR needs to restore the system from the correct files. I only use timestamp.txt for my cleanup job, but basebackup.txt would work just as well. What's in those files is not important to me; I use the date and time they were created or modified. Today is March 17, so every archive created before the last full backup (March 13 in this case) may be deleted.

To find out what files can be deleted you can issue the following command in the terminal:

find /media/nfs/lamp02/*tar.gz -type f ! -newer /media/nfs/lamp02/timestamp.txt

(make sure to adjust the path to your own situation!)

Output:

[screenshot: term_lamp02 - the command output]

To delete them issue the following command:

find /media/nfs/lamp02/*tar.gz -type f ! -newer /media/nfs/lamp02/timestamp.txt | xargs rm -f

(Again, adjust your paths!)

All files created before the last full backup will be gone, keeping your backup server clean 🙂
The only thing you have to do now is create a proper cronjob to automate this. Be sure the command runs AFTER the backups are complete!
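For example, something like this in /etc/cron.d — hypothetical schedule and paths, reusing the command above; pick a time safely after your backup window:

[/etc/cron.d/rear-prune - hypothetical]
0 7 * * * root find /media/nfs/lamp02/*tar.gz -type f ! -newer /media/nfs/lamp02/timestamp.txt | xargs -r rm -f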

For the best results you can run this daily. If you feel you want to keep your files longer, maybe a month, you can tweak the -newer test to accomplish that. Maybe I will update this posting with my own solution for that, although in my case I do not need it.

Happy Linux’ing!

(Edit: this posting is now in the official rear user documentation, which is kinda cool)
(Edit 2: people keep asking me how to make incremental backups with ReaR, so I inserted my /etc/rear/local.conf above)