Category Archive: Debian

HOWTO: Raspberry Pi with 3.5 inch TFT Waveshare clone.

This has been tested on Raspbian Jessie.

I recently built a small Raspberry Pi rig with a Pi 1B, 2B and 3B. To finish things off I wanted to add some displays to it, so I am not looking at a blind panel.

Luckily you can find all sorts of displays on eBay, so I bought one cheap touch TFT display for around 11 euros.

After playing around with the display I found out it’s a cheap Waveshare clone. As expected, no documentation came with the display, so basically you’re on your own.

I’ve searched the internet and found the things I needed to get things running. So after making a backup of my Pi3 SD card I installed the display and got to work.
After powering up the Pi the screen is white, with nothing else to see (backlight only).

Since I am running Raspbian Lite without X, I won’t be doing anything with the touch features of the screen; I am in CLI-only mode.
Log on to your Pi and enter the following commands (you don’t need sudo for the display installation):

sudo apt-get update && sudo apt-get upgrade -y
wget http://www.waveshare.com/w/upload/7/74/LCD-show-170309.tar.gz
tar xvf LCD-show-*.tar.gz
cd LCD-show/
chmod +x LCD35-show
./LCD35-show

Some packages must be installed, answer with yes (y)

The pi will reboot and the display comes to life!

It is very well possible that your screen is upside down. If that is the case you need to find a solution yourself, since I didn’t find any, and I don’t care about it because I just rotated the whole rig 🙂

After installing the display it would be nice if it shows something more than just a login prompt. Htop would be nice, for a start.

This is how to do it:

sudo apt-get install htop
sudo raspi-config (boot options > console autologin)

Don’t reboot yet, do:

nano .bashrc

At the end of the file add:

if [ "$(tty)" = "/dev/tty1" ]; then
    /usr/bin/htop
fi

Press Ctrl-X, then Y, to save the file and exit.

To keep the screen on, do:

sudo nano /etc/kbd/config

Set the following:
BLANK_TIME=0

Save the file

sudo nano /boot/cmdline.txt
add
consoleblank=0
to the single line (the file must remain one single line)
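
If you prefer not to hand-edit it, the append can also be done with sed. A sketch, demonstrated on a scratch copy with a made-up example command line; on the Pi itself, run the same sed (with sudo) against /boot/cmdline.txt:

```shell
# Demonstrated on a scratch copy; the example content is a typical Raspbian
# kernel command line, not necessarily yours. The real file is /boot/cmdline.txt
# and it must remain one single line.
cmdline=/tmp/cmdline-demo.txt
printf 'dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait\n' > "$cmdline"
sed -i '1 s/$/ consoleblank=0/' "$cmdline"   # append to the end of line 1
cat "$cmdline"
```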

In /boot/config.txt, comment out (add a # in front of) the line that starts with dtoverlay=ads7846,cs=1,penirq=17,penirq_pull=2,speed=1000000,keep_vref_on=1,….

Save the file.
Reboot:

sudo shutdown -r now

Sit back and enjoy your Raspberry Pi display showing HTOP 🙂

If you want to watch pi-hole stats, replace /usr/bin/htop with /etc/.pihole/pihole -c in .bashrc

A great example of a “slow” brute force attack #ossec

The last couple of days a lot of malicious servers got caught by my OSSEC HIDS/IPS and have been sent to my iptables jail. However, I’ve been seeing one host (185.93.185.239) evade my traps for days. It has been knocking on my door at a slow pace, slow enough not to trigger the brute force detection (six events within a short period of time).

So I changed the brute force detection window to 86400 seconds, to see if that helps.
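
For reference, that kind of widened window lives in a local rule. Below is a rough sketch of such a rule for /var/ossec/rules/local_rules.xml; the rule id 100100 is made up, 5716 is OSSEC’s stock “sshd authentication failed” rule, and the exact event count before firing varies by version, so treat the numbers as a starting point:

```xml
<group name="local,sshd,">
  <!-- Hypothetical local rule: catch slow SSH brute force attempts.
       5716 is OSSEC's "sshd authentication failed" rule; the id and
       numbers are examples, tune them to your own setup. -->
  <rule id="100100" level="10" frequency="6" timeframe="86400">
    <if_matched_sid>5716</if_matched_sid>
    <same_source_ip />
    <description>Slow brute force: repeated SSH auth failures within 24h</description>
  </rule>
</group>
```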

The result:

[image]

He got caught and went to jail 🙂

iptables -L

[image]

HOWTO: Install OSSEC / WAZUH hids #security #ossec #wazuh

The last few nights I have been working on this project. At first it seemed easy, but a lot of information on the internet is either too old, not well maintained, incomplete, or whatever. Needless to say, these kinds of projects are complex and the IT environment changes fast, so I cannot point any fingers at anyone, and I won’t. By the time YOU find this blog because you ran into the same issues, this posting might be outdated as well 🙂

After successfully testing this at home, I implemented it at work in under an hour 🙂

[image]

For this project I started at http://wazuh-documentation.readthedocs.io/en/latest/ and, as usual, I ended up looking all over the internet. But I took notes; maybe they’ll help you out. Have fun.
(I prepared this posting yesterday and scheduled it for automatic posting at 15:30, dunno if that works)

Installation manual: single-host Wazuh HIDS on Debian 8 (Jessie) (16 GB RAM, 40 GB HDD), including ELK stack (Kibana interface behind a secured proxy)
Please realize these are my notes; I didn’t clean them up.

base debian jessie server install with standard system utilities
[already installed]

After install
logon as root
apt-get update
apt-get install openssh-server openssh-client mc curl sudo gcc make git libssl-dev apt-transport-https
su
adduser wazuh sudo
visudo

[insert] wazuh ALL=(ALL:ALL) ALL [after line with %sudo]
su wazuh

cd ~
mkdir ossec_tmp
cd ossec_tmp
git clone -b stable https://github.com/wazuh/ossec-wazuh.git
cd ossec-wazuh
sudo ./install.sh

sudo /var/ossec/bin/ossec-control start

[Skip the agent part for now. http://wazuh-documentation.readthedocs.io/en/latest/first_steps.html]

sudo nano /etc/apt/sources.list.d/java-8-debian.list
[insert]
deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
[save]

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886
sudo apt-get update
sudo apt-get install oracle-java8-installer
wget -qO - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://packages.elasticsearch.org/logstash/2.1/debian stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install logstash

[skip the forwarder, I don’t need it since the whole HIDS runs on a single server]

sudo cp ~/ossec_tmp/ossec-wazuh/extensions/logstash/01-ossec-singlehost.conf /etc/logstash/conf.d/
sudo cp ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/elastic-ossec-template.json /etc/logstash/

sudo curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
sudo gzip -d GeoLiteCity.dat.gz && sudo mv GeoLiteCity.dat /etc/logstash/
sudo usermod -a -G ossec logstash

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
sudo apt-get update && sudo apt-get install elasticsearch
sudo update-rc.d elasticsearch defaults 95 10

sudo nano /etc/elasticsearch/elasticsearch.yml
[find the following variables, uncomment them, and rename them as you wish]
cluster.name: ossec
node.name: ossec_node1
uncomment bootstrap.mlockall: true
[save]

sudo nano /etc/security/limits.conf
[insert at the end]
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
[save]

sudo nano /etc/default/elasticsearch
[edit and uncomment]
# ES_HEAP_SIZE - Set it to half your system RAM
ES_HEAP_SIZE=8g
MAX_LOCKED_MEMORY=unlimited
MAX_OPEN_FILES=65535
[save]
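
To get the “half your RAM” number without doing mental math, a quick sketch (Linux-only, since it reads /proc/meminfo):

```shell
# Print half of the total system RAM in megabytes as a suggested heap size
# (e.g. roughly ES_HEAP_SIZE=8000m on a 16 GB box).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_mb=$(( total_kb / 2 / 1024 ))
echo "ES_HEAP_SIZE=${half_mb}m"
```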

sudo nano /usr/lib/systemd/system/elasticsearch.service
[uncomment]
LimitMEMLOCK=infinity
[save]

sudo service elasticsearch start
sudo systemctl enable elasticsearch

cd ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/ && curl -XPUT "http://localhost:9200/_template/ossec/" -d "@elastic-ossec-template.json"
sudo update-rc.d logstash defaults 95 10
sudo service logstash start

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb http://packages.elastic.co/kibana/4.5/debian stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install kibana

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service

sudo service kibana start

In your browser

http://IP_OF_HOST_OR_HOSTNAME:5601/app/kibana

[wait for the yellow status light, it will turn green; if not, refresh after 2 minutes]
Go to http://IP_OF_HOST_OR_HOSTNAME:5601
[configure]
– Check “Index contains time-based events”.
– Insert Index name or pattern: ossec-*
– On “Time-field name” list select @timestamp option.
– Click on “Create” button.
– You should see the field list with about 72 fields.
– Go to the “Discover” tab in the top bar.

– Click at top bar on “Settings”.
– Click on “Objects”.
– Then click the “Import” button.
– Select the file ~/ossec_tmp/ossec-wazuh/extensions/kibana/kibana-ossecwazuh-dashboards.json
– Optional: you can download the dashboards JSON file directly from the repository.
[Refresh the Kibana page and you should be able to load your imported Dashboards.]

su
mkdir -p /var/ossec/update/ruleset && cd /var/ossec/update/ruleset
wget https://raw.githubusercontent.com/wazuh/ossec-rules/stable/ossec_ruleset.py
chmod +x /var/ossec/update/ruleset/ossec_ruleset.py
/var/ossec/update/ruleset/ossec_ruleset.py --help

Update the ruleset:
/var/ossec/update/ruleset/ossec_ruleset.py -a
This can be cronned in the future.

crontab -e (as root)
[insert]
@weekly cd /var/ossec/update/ruleset && ./ossec_ruleset.py -s
[save]

NGINX Proxy

sudo apt-get update
sudo apt-get install nginx apache2-utils

sudo rm /etc/nginx/sites-available/default
sudo nano /etc/nginx/sites-available/default
[insert]
server {
    listen 80 default_server; # listen on IPv4
    listen [::]:80; # listen on IPv6
    return 301 https://$host$request_uri;
}

server {
    listen *:443;
    listen [::]:443;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/kibana-access.crt;
    ssl_certificate_key /etc/pki/tls/private/kibana-access.key;
    server_name "Server Name";
    access_log /var/log/nginx/kibana.access.log;
    error_log /var/log/nginx/kibana.error.log;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
        proxy_pass http://127.0.0.1:5601;
    }
}
[save]

cd ~
[document your passwords for the next part securely!]
sudo openssl genrsa -des3 -out server.key 1024
sudo openssl req -new -key server.key -out server.csr

sudo cp server.key server.key.org
sudo openssl rsa -in server.key.org -out kibana-access.key
sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out kibana-access.crt
sudo mkdir -p /etc/pki/tls/certs
sudo cp kibana-access.crt /etc/pki/tls/certs/
sudo mkdir -p /etc/pki/tls/private/
sudo cp kibana-access.key /etc/pki/tls/private/

sudo htpasswd -c /etc/nginx/conf.d/kibana.htpasswd kibanaadmin
[note: pick your own username instead of kibanaadmin if you want to]

sudo nano /opt/kibana/config/kibana.yml
[edit and uncomment]
server.host: "127.0.0.1"
[save]

sudo service kibana start
sudo service nginx restart

browse: https://your_host_or_ip:443

After a reboot, logstash has to be started manually. I did not spend much time on the issue since I hardly ever need to reboot. I will update this when I solve it; meanwhile, any tips are very welcome in the comments …
########################################
Drop me a note if you find an error, thank me not 🙂

@sucurisecurity killed my wordpress site (twice)

w00t w00t! mod_fcgid: stderr: PHP Fatal error: Cannot redeclare class SucuriScanSiteCheck

Yesterday Sucuri for WordPress was updated to 1.7.18 and my website immediately went offline. I got notified about it by Jetpack.
Problem: I cannot deactivate the Sucuri plugin via the WordPress dashboard, since the site is down and I can’t log in 🙂
Solution: log on via FTP and rename the Sucuri plugin directory to .old. My site went back up immediately.

I tried to revert to my latest backup, which worked, but Sucuri got automatically updated to version 1.7.19. My website went down again .. and I could start all over again, disabling the plugin over FTP.

For now I leave the plugin as is. Hopefully Sucuri comes with a solution soon. I’ll wait ..

Update: after posting this to Twitter Sucuri immediately contacted me and offered to provide a solution. As we speak we are working together to get this solved. Thumbs up for a company like that.

Update2: it’s working again, but the source of the troubles is yet to be determined.

EDIT: Prune backups made with Relax and Recover #rear #linux

People keep asking me how I make incremental backups with rear, so I edited my posting about pruning backups with rear and inserted my config.

Prune backups …

If you don’t want to look that up, you’ll find my /etc/rear/local.conf below.

BACKUP=NETFS
OUTPUT=ISO
CDROM_SIZE=20
BACKUP_URL=nfs://xxx.xxx.xxx.xxx/volume2/LinuxDR/rear
ISO_DIR=/mnt/ISO
ISO_PREFIX="rear-$HOSTNAME"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' '/mnt/*' '/media/*' $VAR_DIR/output/\* )
BACKUP_SELINUX_DISABLE=1
BACKUP_TYPE=incremental
FULLBACKUPDAY=Fri

I also rsync my backups to an external (Raspberry Pi 1B) server, will post about that later on.

Have fun backing up!

Why would one use #fail2ban ..

This is why 🙂

Chain INPUT (policy ACCEPT)
target prot opt source destination
fail2ban-postfix-sasl tcp — anywhere anywhere multiport dports smtp
fail2ban-dovecot-pop3imap tcp — anywhere anywhere multiport dports pop3,pop3s,imap2,imaps
fail2ban-pureftpd tcp — anywhere anywhere multiport dports ftp
fail2ban-ssh tcp — anywhere anywhere multiport dports ssh

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain fail2ban-dovecot-pop3imap (1 references)
target prot opt source destination
RETURN all — anywhere anywhere

Chain fail2ban-postfix-sasl (1 references)
target prot opt source destination
REJECT all — 157.122.148.247 anywhere reject-with icmp-port-unreachable
REJECT all — 193.189.117.162 anywhere reject-with icmp-port-unreachable
REJECT all — 193.189.117.151 anywhere reject-with icmp-port-unreachable
REJECT all — aus1345893.lnk.telstra.net anywhere reject-with icmp-port-unreachable
REJECT all — 187-54-83-70.gnace704.e.brasiltelecom.net.br anywhere reject-with icmp-port-unreachable
REJECT all — 41.65.158.124 anywhere reject-with icmp-port-unreachable
REJECT all — 185.125.4.196 anywhere reject-with icmp-port-unreachable
RETURN all — anywhere anywhere

Chain fail2ban-pureftpd (1 references)
target prot opt source destination
REJECT all — localhost anywhere reject-with icmp-port-unreachable
REJECT all — 46.105.49.146 anywhere reject-with icmp-port-unreachable
REJECT all — 221.5.49.36 anywhere reject-with icmp-port-unreachable
RETURN all — anywhere anywhere

Chain fail2ban-ssh (1 references)
target prot opt source destination
REJECT all — 119.164.254.50 anywhere reject-with icmp-port-unreachable
REJECT all — 113.12.184.238 anywhere reject-with icmp-port-unreachable
REJECT all — customer-187-157-170-18-sta.uninet-ide.com.mx anywhere reject-with icmp-port-unreachable
REJECT all — 171.234.8.73 anywhere reject-with icmp-port-unreachable
REJECT all — 121.15.13.237 anywhere reject-with icmp-port-unreachable
REJECT all — 117.3.103.61 anywhere reject-with icmp-port-unreachable
REJECT all — 103.4.231.200 anywhere reject-with icmp-port-unreachable
REJECT all — 91.224.160.10 anywhere reject-with icmp-port-unreachable
REJECT all — host-156.196.1.35-static.tedata.net anywhere reject-with icmp-port-unreachable
REJECT all — 222.76.215.239 anywhere reject-with icmp-port-unreachable
REJECT all — 27.72.65.228 anywhere reject-with icmp-port-unreachable
REJECT all — 116.31.116.10 anywhere reject-with icmp-port-unreachable
REJECT all — 123.238.175.59.broad.wh.hb.dynamic.163data.com.cn anywhere reject-with icmp-port-unreachable
REJECT all — 104.148.116.66 anywhere reject-with icmp-port-unreachable
REJECT all — 124.158.5.115 anywhere reject-with icmp-port-unreachable
REJECT all — ip-198.12-150-220.ip.secureserver.net anywhere reject-with icmp-port-unreachable
REJECT all — ubuntu-dev.sofia-connect.net anywhere reject-with icmp-port-unreachable
REJECT all — 13.84.218.172 anywhere reject-with icmp-port-unreachable
REJECT all — 45.119.154.176 anywhere reject-with icmp-port-unreachable
REJECT all — static.vnpt.vn anywhere reject-with icmp-port-unreachable
REJECT all — ec2-52-37-48-252.us-west-2.compute.amazonaws.com anywhere reject-with icmp-port-unreachable
REJECT all — 103.207.39.18 anywhere reject-with icmp-port-unreachable
REJECT all — 185.110.132.201 anywhere reject-with icmp-port-unreachable
REJECT all — oisin.gainpromotion.net anywhere reject-with icmp-port-unreachable
REJECT all — 222.186.56.14 anywhere reject-with icmp-port-unreachable
REJECT all — vps07.snthostings.com anywhere reject-with icmp-port-unreachable
REJECT all — 79.96.151.203.sta.inet.co.th anywhere reject-with icmp-port-unreachable
REJECT all — 222.186.21.224 anywhere reject-with icmp-port-unreachable
REJECT all — static-49-107-226-77.ipcom.comunitel.net anywhere reject-with icmp-port-unreachable
REJECT all — dynamic.vdc.vn anywhere reject-with icmp-port-unreachable
REJECT all — ruslango94.zomro.com anywhere reject-with icmp-port-unreachable
REJECT all — host149-173-static.9-188-b.business.telecomitalia.it anywhere reject-with icmp-port-unreachable
REJECT all — wim-luche9.fastnet.it anywhere reject-with icmp-port-unreachable
REJECT all — ec2-52-26-61-71.us-west-2.compute.amazonaws.com anywhere reject-with icmp-port-unreachable
REJECT all — static.vdc.vn anywhere reject-with icmp-port-unreachable
RETURN all — anywhere anywhere

Monitoring caffeine levels with @zabbix

Can somebody bring me some coffee? Please?

[image]

🙂

HOWTO: Prune backups made with Relax and Recover #rear #linux

I am backing up my Linux machines bare-metal with ReaR, in the form of weekly full backups and incrementals. ReaR does a great job but doesn’t clean up old backups by itself, so if you do nothing you’re filling up your backup server sooner or later.

As I am a lazy Linux sysadmin I don’t want to clean up leftovers manually, so I automated it 🙂 Here is how:

First: my /etc/rear/local.conf

BACKUP=NETFS
OUTPUT=ISO
CDROM_SIZE=20
BACKUP_URL=nfs://xxx.xxx.xxx.xxx/volume2/LinuxDR/rear
ISO_DIR=/mnt/ISO
ISO_PREFIX="rear-$HOSTNAME"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' '/mnt/*' '/media/*' $VAR_DIR/output/\* )
BACKUP_SELINUX_DISABLE=1
BACKUP_TYPE=incremental
FULLBACKUPDAY=Fri
NETFS_PREFIX="$HOSTNAME"

This will create a full backup every Friday, and incrementals on all other days.

In the picture below you see what happens if you let things run their course:

[image: file listing of the lamp02 backup directory]

I have got 11 backup archives, of which I only need 4 (today) to recover when needed. The archives with an “F” in their name are full backups; the other ones (with “I”) are incrementals. There are also two important files here: basebackup.txt and timestamp.txt. Those two files do more or less the same thing: tell the system when the last full backup was made. ReaR needs this to restore the system using the correct files. I only need timestamp.txt for my cleanup job, but I could also use basebackup.txt. What’s in those files is not important to me; I use the time and date they were created or modified. Today is March 17, so every archive created before the last full backup (March 13 in this case) may be deleted.

To find out what files can be deleted you can issue the following command in the terminal:

find /media/nfs/lamp02/*tar.gz -type f ! -newer /media/nfs/lamp02/timestamp.txt

(make sure to adjust the path to your own situation!)

Output:

[image: terminal output of the find command]

To delete them issue the following command:

find /media/nfs/lamp02/*tar.gz -type f ! -newer /media/nfs/lamp02/timestamp.txt | xargs rm -f

(Again, adjust your paths!)

All files created before the last full backup will be gone, keeping your backup server clean 🙂
The only thing you have to do now is create a proper cronjob to automate this. Be sure the command runs AFTER the backups are complete!

For the best results you can do this daily. If you feel you want to keep your files longer, maybe a month, you can tweak around to accomplish that. Maybe I will update this posting with my own solution for that, although in my case I do not need it.
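
The find commands above can be wrapped in a small script for cron. A sketch under assumptions: the directory layout matches mine, and the script path in the cron example is made up.

```shell
#!/bin/sh
# prune_rear: delete ReaR archives that are not newer than the last full
# backup marker (timestamp.txt). Refuses to prune if the marker is missing.
prune_rear() {
    dir="$1"
    stamp="$dir/timestamp.txt"
    [ -f "$stamp" ] || { echo "no timestamp.txt in $dir, not pruning" >&2; return 1; }
    find "$dir" -name '*.tar.gz' -type f ! -newer "$stamp" -exec rm -f {} +
}
# Example call (path from my setup, adjust to yours):
# prune_rear /media/nfs/lamp02
```

Saved as e.g. /usr/local/bin/prune_rear.sh (with the call uncommented), a crontab line like "30 6 * * * /usr/local/bin/prune_rear.sh" runs it daily; pick a time well after your backup window.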

Happy Linux’ing!

(Edit: this posting is now in the official rear user documentation, which is kinda cool)
(Edit 2: people keep asking me how to make incremental backups with ReaR, so I inserted my /etc/rear/local.conf above)

HOWTO: Fix LOCALE errors in Debian

I am installing Debian a lot, and with every fresh install comes this:

perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory

Someone should fix that 🙂

Anyway:

sudo nano /etc/environment

and add the following:

LC_ALL="en_US.utf8"

Or any other locale you want to use. Reboot, and all is fine again.

HOWTO: #Ispconfig3 #Postfix #Greylisting

No-to-Spam
Just a little manual 🙂
Assuming you have ISPConfig 3 with Postfix installed, it’s very easy to get rid of the spam that passes your filters, despite the fact that ISPConfig has an anti-spam engine on board.

I am on Debian 8 (Jessie), by the way.

– Login as root and

apt-get update && apt-get install postgrey

***
#Optional:
Postgrey adds a delay to your mail delivery, but only for hosts that are new to your server. The default is 300 seconds, but you can safely shorten that to 60 or less, since a spamming server usually never retries after receiving the greylist response anyway 🙂
Please note that “10023” is the port on which postgrey runs; it may differ on your installation. Keep that number in mind because you need it in a minute.

nano /etc/default/postgrey

add --delay=60 to this line: POSTGREY_OPTS="--inet=127.0.0.1:10023". It will look like this: POSTGREY_OPTS="--inet=127.0.0.1:10023 --delay=60"
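
The same edit as a one-liner, sketched on a scratch copy of the file; on the real system, run the sed (with sudo) against /etc/default/postgrey instead:

```shell
# Append --delay=60 inside the existing POSTGREY_OPTS quotes.
# Demonstrated on a scratch copy with the Debian default contents.
conf=/tmp/postgrey-demo
printf 'POSTGREY_OPTS="--inet=127.0.0.1:10023"\n' > "$conf"
sed -i 's/^POSTGREY_OPTS="\(.*\)"$/POSTGREY_OPTS="\1 --delay=60"/' "$conf"
cat "$conf"
```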

service postgrey start

***

nano /etc/postfix/main.cf

and add check_policy_service inet:127.0.0.1:10023 to the smtpd_recipient_restrictions.
Mine looks like this:
smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, check_recipient_access mysql:/etc/postfix/mysql-virtual_recipient.cf, check_policy_service inet:127.0.0.1:10023

postfix reload

Done

You might want to see it working, you can do so by issuing the following command:

tail -f /var/log/mail.log | grep greylist

Output looks like this:

root@server /var/log # tail -f mail.log | grep greylist
Dec 21 12:30:31 server postgrey[27672]: action=greylist, reason=new, client_name=unknown, client_address=122.191.145.8, recipient=[redacted]

Enjoy, spam is a thing of the past.