Category Archive: Tech

#Moodle installation on #ISPConfig 3

(Maybe I should have read the ISPConfig manual, but reading manuals is something I almost never do)

W00t! What a drama 🙂

I’ve been on this puzzle for hours, and maybe you have too, since you probably found this posting while searching for your own problems with open_basedir restrictions on “moodledata”.
To keep things short, here is the solution:

Change the path for moodledata from /var/www/clients/clientx/webx/moodledata to /var/www/clients/clientx/webx/private/moodledata (only insert /private, don’t change anything else)
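If you want to script the move, here is a minimal sketch (clientx/webx are placeholders from the paths above, and the function name is my own; you still have to make sure Moodle’s config.php points at the new location):

```shell
# Sketch: relocate moodledata into the site's private/ directory, which
# ISPConfig includes in its open_basedir for the web. Paths are placeholders.
move_moodledata() {
    site="$1"    # e.g. /var/www/clients/client1/web1
    mkdir -p "$site/private"
    mv "$site/moodledata" "$site/private/moodledata"
}
# Afterwards, point $CFG->dataroot in Moodle's config.php at the new path.
```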

Have fun!

HOWTO: Raspberry Pi with 3.5 inch TFT Waveshare clone.

This has been tested on Raspbian Jessie.

I recently built a kind of Raspberry Pi rig with the Pi 1B, 2B and 3B. To finish things off I wanted to add some displays to it, so I am not looking at a blank panel.

Luckily you can find all sorts of displays on eBay, so I bought one cheap-ass touch TFT display for around 11 euros.

After playing around with the display I found out it’s a cheap Waveshare clone. As expected, no documentation came with the display, so basically you’re on your own.

I searched the internet and found the things I needed to get it running. So after making a backup of my Pi 3 SD card I installed the display and got to work.
After powering up the Pi the screen is white, with nothing else to see (backlight only).

Since I am running Raspbian Lite without X, I won’t be doing anything with the touch features of the screen; I am in CLI-only mode.
Log on to your Pi and enter the following commands (you don’t need sudo for the display installation):

sudo apt-get update && sudo apt-get upgrade -y
wget http://www.waveshare.com/w/upload/7/74/LCD-show-170309.tar.gz
tar xvf LCD-show-*.tar.gz
cd LCD-show/
chmod +x LCD35-show
./LCD35-show

Some packages must be installed; answer with yes (y).

The pi will reboot and the display comes to life!

It is very well possible that your screen is upside down. If that is the case you need to find a solution yourself, since I didn’t find any, and I don’t care about it because I just rotated the whole rig 🙂

After installing the display it would be nice if it shows something more than just a login prompt. htop would be nice, for a start.

This is how to do it:

sudo apt-get install htop
sudo raspi-config (boot options > console autologin)

Don’t reboot yet, do:

nano .bashrc

At the end of the file add:

if [ "$(tty)" = "/dev/tty1" ]; then
    /usr/bin/htop
fi

Ctrl-X to save the file.

To keep the screen on, do:

sudo nano /etc/kbd/config

Set the following:
BLANK_TIME=0

Save the file

sudo nano /boot/cmdline.txt
add consoleblank=0 to the end of the single line (do not create a new line)

Save the file.
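If you’d rather script that edit than use nano, here is a sketch (the function name is mine; the key point is that cmdline.txt must stay a single line):

```shell
# Sketch: append consoleblank=0 to the kernel command line, but only if it
# is not already there. Pass the path to cmdline.txt (usually /boot/cmdline.txt).
add_consoleblank() {
    cmdline="$1"
    grep -q 'consoleblank=' "$cmdline" || sed -i 's/$/ consoleblank=0/' "$cmdline"
}
```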
Reboot:

sudo shutdown -r now

Sit back and enjoy your Raspberry Pi display showing HTOP 🙂

HOWTO: Install OSSEC / WAZUH hids #security #ossec #wazuh

The last few nights I have been working on this project. At first it seemed easy, but a lot of information on the internet is either too old, not well maintained, incomplete, or whatever. Needless to say, these kinds of projects are complex and the IT environment changes fast, so I cannot point fingers at anyone, and I won’t. By the time YOU find this blog because you ran into the same issues, this posting might be outdated as well 🙂

After successfully testing this at home I implemented it at work in under an hour 🙂


For this project I started at http://wazuh-documentation.readthedocs.io/en/latest/ and as usual I ended up looking all over the internet. But I took notes, maybe they’ll help you out. Have fun.
(I prepared this posting yesterday and scheduled it for automatic posting at 15:30, dunno if that works)

Installation manual: single-host Wazuh HIDS on Debian 8 (Jessie) (16 GB RAM, 40 GB HDD), including ELK stack (Kibana interface behind a secured proxy)
Please realize, these are my notes, I didn’t clean it up.

base debian jessie server install with standard system utilities
[already installed]

After install
log on as root
apt-get update
sudo apt-get install openssh-server openssh-client mc curl sudo gcc make git libssl-dev apt-transport-https
su
adduser wazuh sudo
visudo

[insert] wazuh ALL=(ALL:ALL) ALL [after line with %sudo]
su wazuh

cd ~
mkdir ossec_tmp
cd ossec_tmp
git clone -b stable https://github.com/wazuh/ossec-wazuh.git
cd ossec-wazuh
sudo ./install.sh

sudo /var/ossec/bin/ossec-control start

[Skip the agent part for now. http://wazuh-documentation.readthedocs.io/en/latest/first_steps.html]

sudo nano /etc/apt/sources.list.d/java-8-debian.list
[insert]
deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
[save]

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886
sudo apt-get update
sudo apt-get install oracle-java8-installer
wget -qO - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://packages.elasticsearch.org/logstash/2.1/debian stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install logstash

[skip the forwarder, I don’t need it since the whole HIDS runs on a single server]

sudo cp ~/ossec_tmp/ossec-wazuh/extensions/logstash/01-ossec-singlehost.conf /etc/logstash/conf.d/
sudo cp ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/elastic-ossec-template.json /etc/logstash/

sudo curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
sudo gzip -d GeoLiteCity.dat.gz && sudo mv GeoLiteCity.dat /etc/logstash/
sudo usermod -a -G ossec logstash

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
sudo apt-get update && sudo apt-get install elasticsearch
sudo update-rc.d elasticsearch defaults 95 10

sudo nano /etc/elasticsearch/elasticsearch.yml
[find the following variables, uncomment them, and rename them as you wish]
cluster.name: ossec
node.name: ossec_node1
bootstrap.mlockall: true
[save]

sudo nano /etc/security/limits.conf
[insert at the end]
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
[save]

sudo nano /etc/default/elasticsearch
[edit and uncomment]
# ES_HEAP_SIZE - Set it to half your system RAM
ES_HEAP_SIZE=8g
MAX_LOCKED_MEMORY=unlimited
MAX_OPEN_FILES=65535
[save]

sudo nano /usr/lib/systemd/system/elasticsearch.service
[uncomment]
LimitMEMLOCK=infinity
[save]

sudo service elasticsearch start
sudo systemctl enable elasticsearch

cd ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/ && curl -XPUT "http://localhost:9200/_template/ossec/" -d "@elastic-ossec-template.json"
sudo update-rc.d logstash defaults 95 10
sudo service logstash start

wget -qO – https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add –
echo “deb http://packages.elastic.co/kibana/4.5/debian stable main” | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install kibana

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service

sudo service kibana start

In your browser

http://IP_OF_HOST_OR_HOSTNAME:5601/app/kibana

[wait for the yellow light, it will turn green, if not refresh after 2 minutes]
go to http://IP_OF_HOST_OR_HOSTNAME:5601
[configure]
– Check “Index contains time-based events”.
– Insert Index name or pattern: ossec-*
– On “Time-field name” list select @timestamp option.
– Click on “Create” button.
– You should see the field list with about 72 fields.
– Go to the “Discover” tab on the top bar.

– Click at top bar on “Settings”.
– Click on “Objects”.
– Then click the button “Import”
– Select the file ~/ossec_tmp/ossec-wazuh/extensions/kibana/kibana-ossecwazuh-dashboards.json
– Optional: you can download the dashboards JSON file directly from the Wazuh repository.
[Refresh the Kibana page and you should be able to load your imported Dashboards.]

su
mkdir -p /var/ossec/update/ruleset && cd /var/ossec/update/ruleset
wget https://raw.githubusercontent.com/wazuh/ossec-rules/stable/ossec_ruleset.py
chmod +x /var/ossec/update/ruleset/ossec_ruleset.py
/var/ossec/update/ruleset/ossec_ruleset.py --help

Update the ruleset:
/var/ossec/update/ruleset/ossec_ruleset.py -a
This can be cronned in the future.

crontab -e (as root)
[insert]
@weekly cd /var/ossec/update/ruleset && ./ossec_ruleset.py -s
[save]
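Watch out for the crontab format here: a personal crontab (edited with crontab -e) has no user column, while /etc/crontab and files in /etc/cron.d do. If you prefer the cron.d style instead, a sketch (the filename is my own choice):

```
# /etc/cron.d/ossec-ruleset  (sketch; in this format the sixth field is the user)
@weekly root cd /var/ossec/update/ruleset && ./ossec_ruleset.py -s
```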

NGINX Proxy

sudo apt-get update
sudo apt-get install nginx apache2-utils

sudo rm /etc/nginx/sites-available/default
sudo nano /etc/nginx/sites-available/default
[insert]
server {
    listen 80 default_server; # Listen on IPv4
    listen [::]:80;           # Listen on IPv6
    return 301 https://$host$request_uri;
}

server {
    listen *:443;
    listen [::]:443;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/kibana-access.crt;
    ssl_certificate_key /etc/pki/tls/private/kibana-access.key;
    server_name "Server Name";
    access_log /var/log/nginx/kibana.access.log;
    error_log /var/log/nginx/kibana.error.log;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
        proxy_pass http://127.0.0.1:5601;
    }
}
[save]

cd ~
[document your passwords for the next part securely!]
sudo openssl genrsa -des3 -out server.key 1024
sudo openssl req -new -key server.key -out server.csr

sudo cp server.key server.key.org
sudo openssl rsa -in server.key.org -out kibana-access.key
sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out kibana-access.crt
sudo mkdir -p /etc/pki/tls/certs
sudo cp kibana-access.crt /etc/pki/tls/certs/
sudo mkdir -p /etc/pki/tls/private/
sudo cp kibana-access.key /etc/pki/tls/private/
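As a side note, on a reasonably recent openssl the four steps above (key, CSR, decrypt, self-sign) can be collapsed into a single command; the CN below is a placeholder for your own hostname, and I bumped the key size to 2048 bits:

```shell
# Sketch: generate an unencrypted key and a self-signed cert in one go.
# -nodes skips the passphrase; -subj avoids the interactive questions.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout kibana-access.key -out kibana-access.crt \
    -subj "/CN=kibana.example.local"
```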

sudo htpasswd -c /etc/nginx/conf.d/kibana.htpasswd kibanaadmin [note: make up your own username instead of kibanaadmin if you want to]

sudo nano /opt/kibana/config/kibana.yml
[edit and uncomment]
server.host: "127.0.0.1"
[save]

sudo service kibana start
sudo service nginx restart

browse: https://your_host_or_ip:443

After a reboot logstash has to be started manually. I did not spend much time on the issue since I hardly ever need to reboot. I will update when I solve it; meanwhile any tips are very welcome in the comments …
########################################
Drop me a note if you find an error, thank me not 🙂

EDIT: Prune backups made with Relax and Recover #rear #linux

People keep asking me how I make incremental backups with ReaR, so I edited my posting about pruning backups with ReaR and inserted my config.

Prune backups …

If you don’t want to look that up, you’ll find my /etc/rear/local.conf below.

BACKUP=NETFS
OUTPUT=ISO
CDROM_SIZE=20
BACKUP_URL=nfs://xxx.xxx.xxx.xxx/volume2/LinuxDR/rear
ISO_DIR=/mnt/ISO
ISO_PREFIX="rear-$HOSTNAME"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' '/mnt/*' '/media/*' $VAR_DIR/output/\* )
BACKUP_SELINUX_DISABLE=1
BACKUP_TYPE=incremental
FULLBACKUPDAY=Fri

I also rsync my backups to an external (Raspberry Pi 1B) server, will post about that later on.

Have fun backing up!

Monitoring caffeine levels with @zabbix

Can somebody bring me some coffee? Please?

[screenshot: Zabbix graph of my caffeine level]

🙂

HOWTO: Prune backups made with Relax and Recover #rear #linux

I am backing up my Linux machines bare-metal with ReaR, in the form of weekly full backups and daily incrementals. ReaR does a great job but doesn’t clean up old backups by itself, so if you do nothing you’re filling up your backup server sooner or later.

As I am a lazy Linux sysadmin I don’t want to clean up leftovers manually, so I automated it 🙂 Here is how:

First: my /etc/rear/local.conf

BACKUP=NETFS
OUTPUT=ISO
CDROM_SIZE=20
BACKUP_URL=nfs://xxx.xxx.xxx.xxx/volume2/LinuxDR/rear
ISO_DIR=/mnt/ISO
ISO_PREFIX="rear-$HOSTNAME"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' '/mnt/*' '/media/*' $VAR_DIR/output/\* )
BACKUP_SELINUX_DISABLE=1
BACKUP_TYPE=incremental
FULLBACKUPDAY=Fri
NETFS_PREFIX="$HOSTNAME"

This will create a full backup every Friday, and incrementals on all other days.

In the picture below you see what happens if you let things run their course:

[screenshot: directory listing of the backup share for lamp02]

I have got 11 backup archives, of which I only need 4 (today) to recover when needed. The archives with an “F” in their name are full backups; the other ones (with “I”) are incrementals. There are also two important files here, basebackup.txt and timestamp.txt. Those two files do sort of the same thing: tell the system when the last full backup was made. ReaR needs this for restoring the system using the correct files. I only need timestamp.txt for my cleanup job, but I could also use basebackup.txt for it. What’s in those files is not important to me; I use the time and date they were created or modified. Today is March 17, so every archive created before the last full backup (March 13 in this case) may be deleted.

To find out what files can be deleted you can issue the following command in the terminal:

find /media/nfs/lamp02/*tar.gz -type f ! -newer /media/nfs/lamp02/timestamp.txt

(make sure to adjust the path to your own situation!)

Output:

[screenshot: terminal output listing the archives eligible for deletion]

To delete them issue the following command:

find /media/nfs/lamp02/*tar.gz -type f ! -newer /media/nfs/lamp02/timestamp.txt | xargs rm -f

(Again, adjust your paths!)

All files created before the last full backup will be gone, keeping your backup server clean 🙂
The only thing you have to do now is create a proper cronjob to automate this. Be sure the command runs AFTER the backups are complete!
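Wrapped up for that cronjob, the find/delete step could look like this sketch (prune_rear is my own name; point it at your per-host backup directory):

```shell
# Sketch: delete every archive that is not newer than the last full
# backup's timestamp file. Does nothing when timestamp.txt is missing,
# so it is safe to run before the first full backup exists.
prune_rear() {
    dir="$1"    # e.g. /media/nfs/lamp02
    stamp="$dir/timestamp.txt"
    [ -f "$stamp" ] || return 0
    find "$dir" -maxdepth 1 -name '*tar.gz' -type f ! -newer "$stamp" -exec rm -f {} +
}
```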

For the best results you can do this daily. If you feel you want to keep your files longer, maybe a month, you can tweak around to accomplish that. Maybe I will update this posting with my own solution for that, although in my case I do not need it.

Happy Linux’ing!

(Edit: this posting is now in the official rear user documentation, which is kinda cool)
(Edit 2: people keep asking me how to make incremental backups with ReaR, so I inserted my /etc/rear/local.conf above)

Intel outside :-) (drone video)

Gimme some more Pi!

Ok, here it is, my brand new Raspberry Pi 2 Model B.
Specs:

A 900MHz quad-core ARM Cortex-A7 CPU
1GB RAM
Like the (Pi 1) Model B+, it also has:

4 USB ports
40 GPIO pins
Full HDMI port
Ethernet port
Combined 3.5mm audio jack and composite video
Camera interface (CSI)
Display interface (DSI)
Micro SD card slot
VideoCore IV 3D graphics core

When I start my project I’ll report. Probably I will test my download station on this one and check out its performance. After that I will try some backup functions with plain rsync, or lftpd, and Bacula DR.


About the Asus Transformer T200 …

It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!
It can only run a 32 Bit OS, so do not try to install Windows 8 AMD64 software!!!

It won’t work, and it can waste your day 🙂

Greetings to PVA. Next time, buy something decent …

Five Reasons the MI6 Story is a Lie

by craig on June 14, 2015 https://www.craigmurray.org.uk/archives/2015/06/five-reasons-the-mi6-story-is-a-lie/

The Sunday Times has a story claiming that Snowden’s revelations have caused danger to MI6 and disrupted their operations. Here are five reasons it is a lie.

1) The alleged Downing Street source is quoted directly in italics. Yet the schoolboy mistake is made of confusing officers and agents. MI6 is staffed by officers. Their informants are agents. In real life, James Bond would not be a secret agent. He would be an MI6 officer. Those whose knowledge comes from fiction frequently confuse the two. Nobody really working with the intelligence services would do so, as the Sunday Times source does. The story is a lie.

2) The argument that MI6 officers are in danger of being killed by the Russians or Chinese is nonsense. No MI6 officer has been killed by the Russians or Chinese for 50 years. The worst that could happen is that they would be sent home. Agents’ – generally local people, as opposed to MI6 officers – identities would not be revealed in the Snowden documents. Rule No. 1 in both the CIA and MI6 is that agents’ identities are never, ever written down, neither their names nor a description that would allow them to be identified. I once got very, very severely carpeted for adding an agent’s name to my copy of an intelligence report in handwriting, suggesting he was a useless gossip and MI6 should not be wasting their money on bribing him. And that was in post-communist Poland, not a high-risk situation.

3) MI6 officers work under diplomatic cover 99% of the time. Their alias is as members of the British Embassy, or other diplomatic status mission. A portion are declared to the host country. The truth is that Embassies of different powers very quickly identify who are the spies in other missions. MI6 have huge dossiers on the members of the Russian security services – I have seen and handled them. The Russians have the same. In past mass expulsions, the British government has expelled 20 or 30 spies from the Russian Embassy in London. The Russians retaliated by expelling the same number of British diplomats from Moscow, all of whom were not spies! As a third of our “diplomats” in Russia are spies, this was not coincidence. This was deliberate to send the message that they knew precisely who the spies were, and they did not fear them.

4) This anti-Snowden non-story – even the Sunday Times admits there is no evidence anybody has been harmed – is timed precisely to coincide with the government’s new Snooper’s Charter act, enabling the security services to access all our internet activity. Remember that GCHQ already has an archive of the online sex chats of 800,000 perfectly innocent British people.

5) The paper publishing the story is owned by Rupert Murdoch. It is sourced to the people who brought you the dossier on Iraqi Weapons of Mass Destruction, every single “fact” in which proved to be a fabrication. Why would you believe the liars now?

There you have five reasons the story is a lie.