Derek Demuro – Software Engineer – https://www.derekdemuro.com

Monitoring the kingdom… ZABBIX!
Mon, 01 Jun 2020

For a while we’ve been trying to get proper monitoring in place at TakeLAN, on my home servers, and for customers. After a lot of work, tuning, and changes, we finally made it: we have the perfect monitoring setup.

Monitoring…

DHT22 sensor.

In the datacenters we own, we’ve installed DHT22 sensors to monitor the intake and exhaust temperatures of the room; with those we can get an idea of the heat produced and how it circulates. There’s another sensor above some of the servers for extra coverage, but we don’t pay much attention to it, since we also monitor the temperature of the servers themselves.

This exact aluminum case, with a Raspberry Pi 4 inside.

Those sensors and our friendly Raspberry Pi 4 were enough to monitor both UPSes and the temperature of one of the rooms.

The changes gave us peace of mind at low cost; we can also keep historical data on the Pi and use it as a primary jump host for techs and admins.

Setting up the DHT22s was easy enough: we wrote some small apps to read both sensors and to calibrate them precisely against the actual temperature and humidity inside the room.
This Raspberry Pi is also in charge of reporting the temperature from the other sensors, so it must stay operative; we should eliminate this single point of failure. Still, the UPSes don’t support this kind of operation over USB, and I’m looking into multiplexing options to detect a failure of this device and switch over.
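We haven’t published those apps, but the calibration step boils down to applying a fixed linear offset determined against a reference thermometer/hygrometer. A minimal sketch, where the raw reading is passed in as an argument and the offsets (-0.8 °C and +2.5 %RH) are made-up illustration values, not our real calibration:

```shell
#!/bin/sh
# Hypothetical calibration offsets, determined against a reference
# instrument; these numbers are for illustration only.
TEMP_OFFSET="-0.8"
HUM_OFFSET="2.5"

calibrate() {
    # $1 = raw sensor value, $2 = offset; awk does the float math
    awk -v raw="$1" -v off="$2" 'BEGIN { printf "%.1f", raw + off }'
}

calibrate "23.4" "$TEMP_OFFSET"   # temperature → 22.6
echo
calibrate "48.0" "$HUM_OFFSET"    # humidity → 50.5
echo
```

The corrected values are what gets pushed to Zabbix; the raw readings are still kept for sanity checks.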

Backups!

It’s no secret that I run multiple servers in my house, and I also have a detached garage, which comes in very handy for remote backups!

The backup happens over powerline networking (yeah, yeah, I have two gigabit network runs going to the garage, but powerline was more convenient!).

Again, it’s a Raspberry Pi 4, this time with two 10 TB hard drives plugged in, which receives a copy of my zpool, media, storage, LXC containers, and QEMU backups.
If we need to grow, I guess we’ll continue to use USB, since we don’t write to it at full speed and it’s mainly archival; if we ever have to do a large restore, we’ll more than likely pick up the drives and plug them straight into a machine.

Exact model being used for backups.

For this to work, we had to retrofit the garage with an exhaust fan in the roof to keep it cool enough for the device’s regular operation, and I also mounted everything on the roof gable to avoid the vibrations from the air compressor and whatnot.
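The pull itself can be sketched roughly as below; the hostname, dataset paths, and mount point are illustrative, not our real layout. The helper only builds and prints the rsync invocation, so it can be eyeballed before being wired into cron:

```shell
#!/bin/sh
# Hypothetical sketch of the nightly pull to the garage Pi.
# Hostname, paths, and mount point are made up for illustration.
BACKUP_ROOT="/mnt/usb1"
SRC_HOST="vmin01.tx.takelan.com"

backup_cmd() {
    # $1 = remote path to mirror; print the rsync command we'd run
    echo "rsync -aH --delete ${SRC_HOST}:$1/ ${BACKUP_ROOT}$1/"
}

# One mirror per dataset; a cron job would execute these in sequence.
for path in /tank/media /var/lib/vz/dump; do
    backup_cmd "$path"
done
```

Since the target is archival, -aH with --delete keeps an exact mirror; snapshots of anything we want to keep longer live on the zpool side.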

With this, we have the main points covered: monitoring and backups.

Sync between datacenters

We opted for rsync + lsyncd.

This is the configuration we use to sync the TX01 server to LA, KS, and SEC.

-- General Settings
local sourcesandtargets = require('syncfolders')

settings {
        logfile = "/var/log/lsyncd.log",
        statusFile = "/var/run/lsyncd/lsyncd.status",
        pidfile = "/var/run/lsyncd/lsyncd.pid",
        maxDelays = 4000,
        insist = true,
        maxProcesses = 20
}

--------------------------------------------------------------------------
-- LAX TRANSFER DETAILS                                                 --
--------------------------------------------------------------------------
----------------------------------------------------------------- Bind Transfers

sync {
        default.rsyncssh,
        source = "/var/lib/bind",
        targetdir = "/var/lib/bind",
        host = "lsyncd-vmin.la.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        delay = 0,
        settings { maxProcesses = 1 },
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/var/lib/bind/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

sync {
        default.rsyncssh,
        source = "/etc/bind",
        targetdir = "/etc/bind",
        host = "lsyncd-vmin.la.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        delay = 0,
        settings { maxProcesses = 1 },
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/bind/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}
--------------------------------------------------------------------------
-- LAX TRANSFER DETAILS                                                 --
--------------------------------------------------------------------------

--------------------------------------------------------------------------
-- KAN TRANSFER DETAILS                                                 --
--------------------------------------------------------------------------
----------------------------------------------------------------- Bind Transfers

sync {
        default.rsyncssh,
        source = "/var/lib/bind",
        targetdir = "/var/lib/bind",
        host = "lsyncd-vmin.ks.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        delay = 0,
        settings { maxProcesses = 1 },
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/var/lib/bind/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

sync {
        default.rsyncssh,
        source = "/etc/bind",
        targetdir = "/etc/bind",
        host = "lsyncd-vmin.ks.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        delay = 0,
        settings { maxProcesses = 1 },
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/bind/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}
--------------------------------------------------------------------------
-- KAN TRANSFER DETAILS                                                 --
--------------------------------------------------------------------------

--------------------------------------------------------------------------
-- SEC TRANSFER DETAILS                                                 --
--------------------------------------------------------------------------
----------------------------------------------------------------- Default to transfer home files (mail, websites, etc)

-- Use a distinct loop variable so it doesn't shadow the table itself
for _, folder in ipairs( sourcesandtargets )
do
        sync {
                default.rsyncssh,
                source = folder,
                targetdir = folder,
                host = "lsyncd-vmin.nj.takelan.com",
                excludeFrom = "/etc/lsyncd/exclude",
                exclude = { "*.log", "*.tmp", "*~", "*.swp" },
                settings { maxProcesses = 1 },
                delay = 300,
                delete = "running",
                rsync = {
                        binary = "/etc/lsyncd/locking_rsync.sh",
                        backup = true,
                        backup_dir = "/var/lsyncdbackup/",
                        archive = true,
                        links = true,
                        update = true,
                        append_verify = true,
                        temp_dir = "/tmp/",
                },
        }
end

----------------------------------------------------------------- Bind Transfers

sync {
        default.rsyncssh,
        source = "/var/lib/bind",
        targetdir = "/var/lib/bind",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        delay = 0,
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/var/lib/bind/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

sync {
        default.rsyncssh,
        source = "/etc/bind",
        targetdir = "/etc/bind",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        delay = 0,
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/bind/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

----------------------------------------------------------------- Apache Transfers

sync {
        default.rsyncssh,
        source = "/etc/apache2",
        targetdir = "/etc/apache2",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/apache2/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

----------------------------------------------------------------- Logrotate Transfers

sync {
        default.rsyncssh,
        source = "/etc/logrotate.d",
        targetdir = "/etc/logrotate.d",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/logrotate.d/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

----------------------------------------------------------------- Cronjobs Transfers

sync {
        default.rsyncssh,
        source = "/var/spool/cron/crontabs",
        targetdir = "/var/spool/cron/crontabs",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/var/spool/cron/crontabs/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

sync {
        default.rsyncssh,
        source = "/etc/cron.d",
        targetdir = "/etc/cron.d",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/cron.d/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

sync {
        default.rsyncssh,
        source = "/etc/cron.daily",
        targetdir = "/etc/cron.daily",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/cron.daily/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

sync {
        default.rsyncssh,
        source = "/etc/cron.hourly",
        targetdir = "/etc/cron.hourly",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/cron.hourly/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

sync {
        default.rsyncssh,
        source = "/etc/cron.monthly",
        targetdir = "/etc/cron.monthly",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/cron.monthly/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

sync {
        default.rsyncssh,
        source = "/etc/cron.weekly",
        targetdir = "/etc/cron.weekly",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/cron.weekly/",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}

------- THIS ONE MUST USE RSYNC DIRECTLY!
sync {
        default.rsyncssh,
        source = "/etc",
        targetdir = "/etc",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        rsync = {
                binary = "/usr/bin/rsync",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/crontab/",
                archive = true,
                links = true,
                update = true,
                _extra = { "--include=crontab", "--exclude=*" },
                temp_dir = "/tmp/",
        },
}
------- THIS ONE MUST USE RSYNC DIRECTLY!
----------------------------------------------------------------- Webmin Transfers

sync {
        default.rsyncssh,
        source = "/etc/webmin",
        targetdir = "/etc/webmin",
        host = "lsyncd-vmin.nj.takelan.com",
        excludeFrom = "/etc/lsyncd/exclude",
        exclude = { "*.log", "*.tmp", "*~", "*.swp" },
        delete = "running",
        rsync = {
                binary = "/etc/lsyncd/locking_rsync.sh",
                backup = true,
                backup_dir = "/var/lsyncdbackup/etc/webmin",
                archive = true,
                links = true,
                update = true,
                temp_dir = "/tmp/",
        },
}
--------------------------------------------------------------------------
-- SEC TRANSFER DETAILS                                                 --
--------------------------------------------------------------------------

This image may help explain the sync.

Picture from Zabbix showing the interconnection between datacenters.

As you can see, we keep servers (namely SEC01 & TX01) in sync for some critical services, to make sure that if a location fails, we can still respond.

This way, we ensure changes on one server reach the others. Now, a classic sync problem: should we copy open files? No, not really, so for that we created a wrapper around rsync.

#!/bin/bash
# REMEMBER TO MOUNT THIS FOLDER!
# sshfs#root@vmin01.tx.takelan.com:/opt/scripts/locks /opt/scripts/locks fuse delay_connect,defaults,idmap=user,IdentityFile=/root/.ssh/id_rsa,port=22,uid=0,gid=0,allow_other 0 0

### Definitions
OPENFILES_SLEEP_TIME=5
SLEEP_TIME=25
MAX_WAIT=3600
LOCKFILE_FOLDER='/opt/scripts/locks'
LOCK_FILE="$LOCKFILE_FOLDER/rsync-lock"
FOLDER_HOST='vmin01'
HOSTNAME=$(hostname)
# Rsync wrapper to avoid copying partial - opened files...
RSYNC_BINARY="/usr/bin/rsync"

echo "Running locking with $@" >> /root/params.log
# The last two arguments lsyncd passes to rsync are source and target;
# source[0] holds the source path.
source=(${@: -2})

#---------------------------------------------------------------> Functions

# Call this function to decide the final destiny of the sync.
function checkOrDie() {
    mountpoint $LOCKFILE_FOLDER > /dev/null 2>&1
    if [ $? -ne 0 ]; then
        echo "Failure... exiting due to mountpoint failure."
        exit 0
    fi
    return 0
}

# Check if the mountpoint is mounted and working...
function checkMountpoint() {
    mountpoint $LOCKFILE_FOLDER > /dev/null 2>&1

    if [ $? -eq 0 ]; then
        echo "Success: Everything is okay."
    else
        umount -f $LOCKFILE_FOLDER
        mount $LOCKFILE_FOLDER
        checkOrDie
    fi
    return 0
}

# Remove lockfile if exists
function checkMountpointWorks() {
    if [ $HOSTNAME == $FOLDER_HOST ]; then
        echo "I'm the folder host... so skipping"
    else
        checkMountpoint
    fi
    return 0
}

# Remove lockfile if exists
function removeLockfile() {
    if [ -f $LOCK_FILE ]; then
        if grep -q "$HOSTNAME" "$LOCK_FILE"; then
            echo "Lockfile exists and its mine, removing"
            rm $LOCK_FILE
        fi
    fi
    return 0
}

# Removes the lock even if it belongs to someone else
function forceRemoveLock(){
    if [ -f $LOCK_FILE ]; then
        LOCKCONTENT=$(cat "$LOCK_FILE")
        echo "Lockfile exists and belongs to $LOCKCONTENT! Since I'm forcing, removing it."
        rm $LOCK_FILE
    fi
    return 0
}

# Create the lockfile with my hostname
function createLockfile() {
    echo $HOSTNAME > $LOCK_FILE
    return 0
}

# Check if source has open files
function checkForOpenFiles() {
        echo "Checking ${source[0]} for open files..."
        lsof +D "${source[0]}" | tail -n +2 | awk '{ print $4 ";" $9 }' | grep -v 'cwd;' | grep -v 'dovecot' > /dev/null 2>&1
        hasOpenFiles=$?
        echo "Has open files, returned $hasOpenFiles"
        while [ $hasOpenFiles -eq 0 ]; do
                echo "It seems like we do have open files... blocking sync!"
                # This could take a bit...
                lsof +D "${source[0]}" | tail -n +2 | awk '{ print $4 ";" $9 }' | grep -v 'cwd;' | grep -v 'dovecot' > /dev/null 2>&1
                hasOpenFiles=$?
                echo "Has open files, returned $hasOpenFiles"
                # Avoid CPU Pinning
                sleep $OPENFILES_SLEEP_TIME
                echo "Sleeping for $OPENFILES_SLEEP_TIME"
                # Differential sleep so both ends don't poll in lockstep
                sleep $(( (RANDOM % 5) + 1 ))
        done
        echo "Done checking... calling rsync!"
    return 0
}

#---------------------------------------------------------------> Functions

#---------------------------------------------------------------> Main Program

# Make sure the mount is working properly
checkMountpointWorks

# Check if file exists, wait for global lock to go away
if [ ! -f "$LOCK_FILE" ]; then
    echo "Executing lock..."
    createLockfile
    echo "Checking for open files..."
    echo "Syncing!"

# Wait for the lock file to expire or until removed.
else
    NUM_SECS=$(( $(date +%s) - $(stat -c %Y $LOCK_FILE) ))
    while [ -f $LOCK_FILE ] && (( $NUM_SECS < $MAX_WAIT )); do
        sleep $SLEEP_TIME
        # Differential sleep so both ends don't poll in lockstep
        sleep $(( (RANDOM % 10) + 1 ))
        if [ -f "$LOCK_FILE" ]; then
            NUM_SECS=$(( $(date +%s) - $(stat -c %Y "$LOCK_FILE") ))
            echo "Lockfile exists for: $NUM_SECS seconds..."
        else
            break
        fi
    done
    echo "Maximum wait reached or file removed on remote end"
    forceRemoveLock
    createLockfile
    echo "Syncing!"
fi

# Stop to check for open files
checkForOpenFiles

# Start syncing
$RSYNC_BINARY "$@"
rsync_res=$?

# Cleanup
echo "Rsync finished with status $rsync_res..."
removeLockfile
exit $rsync_res
#---------------------------------------------------------------> Main Program

This script wraps the rsync call made by lsyncd and ensures the origin doesn’t currently have open files before we initiate a copy.
Yes, this won’t guarantee that at that EXACT moment there are no open files, but remember: it’s just a bandwidth-saving measure, not a critical one.
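To see why the wrapper blocks, here is its open-file test isolated into a function, fed with canned lsof-style output instead of a live `lsof +D` run (the sample lines are fabricated for illustration):

```shell
#!/bin/sh
# The wrapper's open-file check, isolated: given "lsof +D <dir>" output,
# ignore cwd references and dovecot's long-lived handles, and report
# whether anything else still holds a file open in the source directory.
has_open_files() {
    # stdin = lsof output; exit 0 if blocking opens remain
    # $4 = FD column, $9 = NAME column in standard lsof output
    tail -n +2 | awk '{ print $4 ";" $9 }' \
        | grep -v 'cwd;' | grep -v 'dovecot' > /dev/null 2>&1
}

# Fabricated sample: one process holds a regular file open for reading
printf 'COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME\nrsync 101 root 3r REG 8,1 10 22 /data/file\n' \
    | has_open_files && echo "busy" || echo "free"   # → busy
```

An exit status of 0 ("busy") is what keeps the wrapper in its sleep loop; once only cwd and dovecot entries remain, the sync proceeds.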

I got tired of typing, but this covers a large part of our monitoring and maintenance.

See ya in post 2!

Optimizing server for APC, and PHP.
Fri, 10 Oct 2014

Many times our servers struggle to answer connections in time; this is why we need to optimize.

First, we assume you have Virtualmin installed on Debian 7.

Second, we assume you know some PHP and the common Linux utilities.

We’ll install memcached and php5-memcache (the Memcache PHP extension, which talks to memcached).

php5-memcache and memcached

apt-get install memcached php5-memcache

We assume you’ll run Memcached on the local server, so just restart the service as follows:

/etc/init.d/memcached restart

Now restart Apache to load everything correctly, including PHP.

/etc/init.d/apache2 restart

Now let’s get dirty with APC!

apt-get install php-pear

This will allow us to build new modules for PHP.

Now we need some dependencies to be able to compile APC.

apt-get install php5-dev apache2-prefork-dev build-essential

You’ll be shown a long list of dependencies (GCC and others); accept them.

Now let’s build APC!

pecl install apc

As I don’t have the rest of the output in my terminal right now, here is a transcript:

server2:~# pecl install apc
downloading APC-3.0.17.tgz ...
Starting to download APC-3.0.17.tgz (116,058 bytes)
.........................done: 116,058 bytes
47 source files, building
running: phpize
Configuring for:
PHP Api Version:         20041225
Zend Module Api No:      20060613
Zend Extension Api No:   220060519
Use apxs to set compile flags (if using APC with Apache)? [yes] : <-- ENTER

[...]

----------------------------------------------------------------------
Libraries have been installed in:
   /var/tmp/pear-build-root/APC-3.0.17/modules

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the `LD_LIBRARY_PATH' environment variable
     during execution
   - add LIBDIR to the `LD_RUN_PATH' environment variable
     during linking
   - use the `-Wl,--rpath -Wl,LIBDIR' linker flag
   - have your system administrator add LIBDIR to `/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------

Build complete.
(It is safe to ignore warnings about tempnam and tmpnam).

running: make INSTALL_ROOT="/var/tmp/pear-build-root/install-APC-3.0.17" install
Installing shared extensions:     /var/tmp/pear-build-root/install-APC-3.0.17/usr/lib/php5/20060613+lfs/
running: find "/var/tmp/pear-build-root/install-APC-3.0.17" -ls
998152    4 drwxr-xr-x   3 root     root         4096 Mar 28 15:23 /var/tmp/pear-build-root/install-APC-3.0.17
998214    4 drwxr-xr-x   3 root     root         4096 Mar 28 15:23 /var/tmp/pear-build-root/install-APC-3.0.17/usr
998215    4 drwxr-xr-x   3 root     root         4096 Mar 28 15:23 /var/tmp/pear-build-root/install-APC-3.0.17/usr/lib
998216    4 drwxr-xr-x   3 root     root         4096 Mar 28 15:23 /var/tmp/pear-build-root/install-APC-3.0.17/usr/lib/php5
998217    4 drwxr-xr-x   2 root     root         4096 Mar 28 15:23 /var/tmp/pear-build-root/install-APC-3.0.17/usr/lib/php5/20060613+lfs
998213  416 -rwxr-xr-x   1 root     root       418822 Mar 28 15:23 /var/tmp/pear-build-root/install-APC-3.0.17/usr/lib/php5/20060613+lfs/apc.so

Build process completed successfully
Installing '/var/tmp/pear-build-root/install-APC-3.0.17//usr/lib/php5/20060613+lfs/apc.so'
install ok: channel://pecl.php.net/APC-3.0.17
You should add "extension=apc.so" to php.ini
server2:~#

Transcript from https://www.howtoforge.com/apc-php5-apache2-debian-etch

If this didn’t go well, just go with apt-get install php-apc!
Now is where everything gets fuzzy: since we use virtual servers, you’ll need to configure APC for each virtual server individually and decide how much cache to give it, etc.

In our case, lets go with a client.

This should be good enough for you to start enjoying APC.

Add this to your php.ini (if using vhosts with Virtualmin, that’s /home/user/etc/php.ini):

[APC]
extension=apc.so
apc.enabled=1
apc.shm_segments=1
; This value depends on kernel.shmmax; in our case it reads 268435456,
; which is roughly 256M. Check it with: cat /proc/sys/kernel/shmmax
apc.shm_size=256M
apc.optimization=0
;Control TTL of Cache
apc.ttl=108000
apc.user_ttl=108000
apc.gc_ttl=108000
apc.cache_by_default=1
apc.filters="-/home/user/public_html/apc/apc\.php$"
apc.slam_defense=0
apc.use_request_time=1
apc.mmap_file_mask=/tmp/apc-user.XXXXXX
;apc.mmap_file_mask=/dev/zero
apc.file_update_protection=2
apc.enable_cli=1
apc.max_file_size=5M
; WARNING: apc.stat checks whether the file changed on every request, before opening it.
apc.stat=0
apc.write_lock=1
apc.report_autofilter=0
apc.include_once_override=0
apc.rfc1867=0
apc.rfc1867_prefix=upload_
apc.rfc1867_name=APC_UPLOAD_PROGRESS
apc.rfc1867_freq=0
apc.rfc1867_ttl=3600
apc.lazy_classes=0
apc.lazy_functions=0
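To double-check that shm_size stays under the kernel limit, a one-liner converts shmmax bytes to megabytes; on a live box you’d feed it the output of cat /proc/sys/kernel/shmmax:

```shell
#!/bin/sh
# Sanity-check apc.shm_size against the kernel limit: a shared memory
# segment larger than kernel.shmmax will fail to allocate.
shmmax_mb() {
    # $1 = shmmax in bytes; print whole megabytes
    echo $(( $1 / 1024 / 1024 ))
}

shmmax_mb 268435456   # → 256, matching apc.shm_size=256M
```

If the number comes out smaller than the shm_size you want, raise kernel.shmmax first (via sysctl) before bumping APC.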

I’d also recommend changing the swappiness of the server to a low value; remember, I/O is a killer, and it’s better to max out the RAM and leave swap for emergencies.

Normally it would be set to 60.

My tests have shown that for my configurations it’s best to have it around 15–30.

To check the actual value:

cat /proc/sys/vm/swappiness

That should print 60 on the screen.

To change it on the fly and test it:

sysctl vm.swappiness=[value]

On reboot that will reset to the default; to make it stick:

edit /etc/sysctl.conf

and add at the end of the file:

vm.swappiness=[value]

Now that will stick on every restart.
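A small idempotent sketch of that last step, parameterized on the config path so it can be tried against a scratch file first; the guard against duplicate lines is my addition, not part of the original recipe:

```shell
#!/bin/sh
# Persist vm.swappiness, appending the line only if sysctl.conf
# doesn't already set it. SYSCTL_CONF is overridable for testing.
SYSCTL_CONF="${SYSCTL_CONF:-/etc/sysctl.conf}"

set_swappiness() {
    # $1 = desired swappiness value
    if grep -q '^vm\.swappiness=' "$SYSCTL_CONF"; then
        echo "vm.swappiness already set, not appending"
    else
        echo "vm.swappiness=$1" >> "$SYSCTL_CONF"
    fi
}

set_swappiness 15
```

Pair it with sysctl vm.swappiness=15 so the running kernel picks up the value immediately instead of waiting for a reboot.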

Stay tuned, I’ll keep expanding my guide to getting the best out of your servers.

Tinkering around with .htaccess (Apache) to block certain requests.
Tue, 08 Oct 2013

Okay, at times certain User-Agents can hammer Apache annoyingly, so what better way to stop them than to keep them from transferring the whole site for no good reason?

So yeah, what I’ve done is use .htaccess to block them, in my case cURL and Pingdom… why?

Okay, cURL can be used to perform automated actions on your website through POST and GET, and for us that’s a security hole and, even more, a pain in the bum. So we block it, and those clients receive a 403 Forbidden page.

And you can make the block do many things besides serving a fast error page, such as redirecting, among others.

Apart from this, we’ve been tinkering with parsing the Apache logs to find recurring automated scripts that load us up, and blocking them for a while, depending on what they’re doing with the website; so far, so good. We use firewall rules for this, to keep the impact to a minimum.
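We haven’t shown that log-parsing step; a minimal sketch of the counting part, with a made-up threshold, could look like this (the firewall action itself is left to a separate job):

```shell
#!/bin/sh
# Count requests per client IP in an Apache combined-format access log
# and print any IP above a threshold. The threshold is illustrative;
# tune it to your traffic before blocking anything.
THRESHOLD=100

abusive_ips() {
    # $1 = path to access log; the client IP is field 1
    awk -v limit="$THRESHOLD" \
        '{ hits[$1]++ } END { for (ip in hits) if (hits[ip] > limit) print ip }' "$1"
}

abusive_ips /var/log/apache2/access.log
```

Each printed IP would then be fed to a firewall rule such as iptables -I INPUT -s <ip> -j DROP (with a cron job to expire the blocks after a while).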

Anyway, here’s how it’s done.

Create a .htaccess file in the main folder of your site (the Apache document root, in our case).

#
# Apache/PHP/Drupal settings:
#
 
# Protect files and directories from prying eyes.
<FilesMatch "\.(engine|inc|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)(|~|\.sw[op]|\.bak|\.orig|\.save)?$|^(\..*|Entries.*|Repository|Root|Tag|Template)$">
  Order allow,deny
</FilesMatch>
 
# Don't show directory listings for URLs which map to a directory.
Options -Indexes
 
# Follow symbolic links in this directory.
Options +FollowSymLinks
 
# Make Drupal handle any 404 errors.
ErrorDocument 404 /index.php
 
# Set the default handler.
DirectoryIndex index.php index.html index.htm
 
# Override PHP settings that cannot be changed at runtime. See
# sites/default/default.settings.php and drupal_environment_initialize() in
# includes/bootstrap.inc for settings that can be changed at runtime.
 
# PHP 5, Apache 1 and 2.
<IfModule mod_php5.c>
  php_flag magic_quotes_gpc                 off
  php_flag magic_quotes_sybase              off
  php_flag register_globals                 off
  php_flag session.auto_start               off
  php_value mbstring.http_input             pass
  php_value mbstring.http_output            pass
  php_flag mbstring.encoding_translation    off
</IfModule>
 
# Requires mod_expires to be enabled.
<IfModule mod_expires.c>
  # Enable expirations.
  ExpiresActive On
 
  # Cache all files for 2 weeks after access (A).
  ExpiresDefault A1209600
 
  <FilesMatch "\.php$">
    # Do not allow PHP scripts to be cached unless they explicitly send cache
    # headers themselves. Otherwise all scripts would have to overwrite the
    # headers set by mod_expires if they want another caching behavior. This may
    # fail if an error occurs early in the bootstrap process, and it may cause
    # problems if a non-Drupal PHP file is installed in a subdirectory.
    ExpiresActive Off
  </FilesMatch>
</IfModule>
 
# Various rewrite rules.
<IfModule mod_rewrite.c>
  RewriteEngine on
 
  # Set "protossl" to "s" if we were accessed via https://.  This is used later
  # if you enable "www." stripping or enforcement, in order to ensure that
  # you don't bounce between http and https.
  RewriteRule ^ - [E=protossl]
  RewriteCond %{HTTPS} on
  RewriteRule ^ - [E=protossl:s]
 
  # Make sure Authorization HTTP header is available to PHP
  # even when running as CGI or FastCGI.
  RewriteRule ^ - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
 
  # Block access to "hidden" directories whose names begin with a period. This
  # includes directories used by version control systems such as Subversion or
  # Git to store control files. Files whose names begin with a period, as well
  # as the control files used by CVS, are protected by the FilesMatch directive
  # above.
  #
  # NOTE: This only works when mod_rewrite is loaded. Without mod_rewrite, it is
  # not possible to block access to entire directories from .htaccess, because
  # <DirectoryMatch> is not allowed here.
  #
  # If you do not have mod_rewrite installed, you should remove these
  # directories from your webroot or otherwise protect them from being
  # downloaded.
  RewriteRule "(^|/)\." - [F]
 
  # Block cURL and Pingdom, as we get too much load from them
  BrowserMatchNoCase curl dndurl
  BrowserMatchNoCase Pingdom dndurl
  Order Deny,Allow
  Deny from env=dndurl
 
  # If your site can be accessed both with and without the 'www.' prefix, you
  # can use one of the following settings to redirect users to your preferred
  # URL, either WITH or WITHOUT the 'www.' prefix. Choose ONLY one option:
  #
  # To redirect all users to access the site WITH the 'www.' prefix,
  # (https://example.com/... will be redirected to https://www.example.com/...)
  # uncomment the following:
   RewriteCond %{HTTP_HOST} .
   RewriteCond %{HTTP_HOST} !^www\. [NC]
   RewriteRule ^ http%{ENV:protossl}://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
  #
  # To redirect all users to access the site WITHOUT the 'www.' prefix,
  # (https://www.example.com/... will be redirected to https://example.com/...)
  # uncomment the following:
  # RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
  # RewriteRule ^ http%{ENV:protossl}://%1%{REQUEST_URI} [L,R=301]
 
  # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
  # VirtualDocumentRoot and the rewrite rules are not working properly.
  # For example if your site is at https://example.com/drupal uncomment and
  # modify the following line:
  # RewriteBase /drupal
  #
  # If your site is running in a VirtualDocumentRoot at https://example.com/,
  # uncomment the following line:
  # RewriteBase /
  ### BOOST START ###
 
  # Allow for alt paths to be set via htaccess rules; allows for cached variants (future mobile support)
  RewriteRule .* - [E=boostpath:normal]
 
  # Caching for anonymous users
  # Skip boost IF not get request OR uri has wrong dir OR cookie is set OR request came from this server OR https request
  RewriteCond %{REQUEST_METHOD} !^(GET|HEAD)$ [OR]
  RewriteCond %{REQUEST_URI} (^/(admin|cache|misc|modules|sites|system|openid|themes|node/add|comment/reply))|(/(edit|user|user/(login|password|register))$) [OR]
  RewriteCond %{HTTPS} on [OR]
  RewriteCond %{HTTP_COOKIE} DRUPAL_UID [OR]
  RewriteCond %{ENV:REDIRECT_STATUS} 200
  RewriteRule .* - [S=7]
 
  # GZIP
  RewriteCond %{HTTP:Accept-encoding} !gzip
  RewriteRule .* - [S=3]
  RewriteCond %{DOCUMENT_ROOT}/cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.html -s
  RewriteRule .* cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.html [L,T=text/html,E=no-gzip:1]
  RewriteCond %{DOCUMENT_ROOT}/cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.xml -s
  RewriteRule .* cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.xml [L,T=text/xml,E=no-gzip:1]
  RewriteCond %{DOCUMENT_ROOT}/cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.json -s
  RewriteRule .* cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.json [L,T=text/javascript,E=no-gzip:1]
 
  # NORMAL
  RewriteCond %{DOCUMENT_ROOT}/cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.html -s
  RewriteRule .* cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.html [L,T=text/html]
  RewriteCond %{DOCUMENT_ROOT}/cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.xml -s
  RewriteRule .* cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.xml [L,T=text/xml]
  RewriteCond %{DOCUMENT_ROOT}/cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.json -s
  RewriteRule .* cache/%{ENV:boostpath}/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\.json [L,T=text/javascript]
 
  ### BOOST END ###
 
  # Pass all requests not referring directly to files in the filesystem to
  # index.php. Clean URLs are handled in drupal_environment_initialize().
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_FILENAME} !-d
  RewriteCond %{REQUEST_URI} !=/favicon.ico
  RewriteRule ^ index.php [L]
 
  # Rules to correctly serve gzip compressed CSS and JS files.
  # Requires both mod_rewrite and mod_headers to be enabled.
  <IfModule mod_headers.c>
    # Serve gzip compressed CSS files if they exist and the client accepts gzip.
    RewriteCond %{HTTP:Accept-encoding} gzip
    RewriteCond %{REQUEST_FILENAME}\.gz -s
    RewriteRule ^(.*)\.css $1\.css\.gz [QSA]
 
    # Serve gzip compressed JS files if they exist and the client accepts gzip.
    RewriteCond %{HTTP:Accept-encoding} gzip
    RewriteCond %{REQUEST_FILENAME}\.gz -s
    RewriteRule ^(.*)\.js $1\.js\.gz [QSA]
 
    # Serve correct content types, and prevent mod_deflate double gzip.
    RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
    RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]
 
    <FilesMatch "(\.js\.gz|\.css\.gz)$">
      # Serve correct encoding type.
      Header set Content-Encoding gzip
      # Force proxies to cache gzipped & non-gzipped css/js files separately.
      Header append Vary Accept-Encoding
    </FilesMatch>
  </IfModule>
</IfModule>
]]>
https://www.derekdemuro.com/2013/10/08/tinkering-around-with-htaccess-apache-to-block-certain-requests/feed/ 0 3406
Server Monitor little dirty script. https://www.derekdemuro.com/2013/08/02/server-monitor-little-dirty-script/ https://www.derekdemuro.com/2013/08/02/server-monitor-little-dirty-script/#respond Fri, 02 Aug 2013 04:25:28 +0000 https://www.derekdemuro.com/?p=2031 #!/bin/bash
################################################################################
# Derek Demuro, this script is given as is, CopyLEFT                           #
################################################################################
################################################################################
# README / LEAME                                                               #
################################################################################
# This script keeps checking the load average and logging it.
# Once the load hits the high-load threshold, it is time to shut down services
# and reclaim memory; this way we avoid running into a kernel panic.
# If we are hit by a DDoS attack, the server won't be killed while the services
# are down. Once the attack ends and the load comes back down, the services
# are started up again.
#
# The script will be refactored into functions later on, but not for now.
############################ To Set Up #########################################
#
# Add this script to a cronjob so it runs on boot. Most Linux distros support
# the @reboot parameter:
#
#   crontab -e
#   # At the bottom add:
#   @reboot /(path)/servermon.sh
#
# REMEMBER TO ADD THE EXECUTABLE BIT TO THE FILE (e.g. chmod +x)
################################################################################
#                           SCRIPT CONFIGURATION                               #
################################################################################
# Initialize counter
times=0
# How many iterations to run before clearing the log
linestoclear=$1
# Critical load at which to stop everything and restart
highload=$2
# Wait until the load drops back down to this value
lowload=$3
# Sleep time while the load is high
high=$4
# Sleep time while the load is medium
mid=$5
# Sleep time while the load is low
low=$6
################################################################################

# Log the configuration
echo 'Script will run ' $1 ' times then will clear itself, High load: ' $highload 'Will wait to: ' $lowload "HighWait: $high, MidWait: $mid, LowWait: $low"

while true
do
  # Add 1 to times
  times=`expr $times + 1`
  # check = integer part of the 1-minute load average
  check=`cat /proc/loadavg | sed 's/\./ /' | awk '{print $1}'`
  date=`date`
  # Print the time this iteration started.
  echo 'Load at the moment' $check 'at date ' $date 'script ran: ' $times ' times'

  # How many times did it run? Time to clear the log?
  if [ $times -eq $linestoclear ]
  then
    echo 'Log cleared' > hsysmon.log
    echo 'Script will run ' $1 ' times then will clear itself'
    times=0
  fi

  # If the load is above the critical threshold
  if [ $check -gt $highload ]
  then
    # Stop Apache
    var=`service apache2 stop`
    # Moment of shutdown
    date=`date`
    msg='Highload autorestart did run and load average was: '
    echo 'At: ' $date 'With load: ' $check 'services shut down successfully'

    # Sleep while the load stays HIGH, sleeping longer the higher it is
    while [ $check -gt $lowload ]
    do
      if [ $check -gt $high ]
      then
        sleep $high
      elif [ $check -gt $mid ]
      then
        sleep $mid
      elif [ $check -gt $low ]
      then
        sleep $low
      else
        sleep 3
      fi
      # New check
      check=`cat /proc/loadavg | sed 's/\./ /' | awk '{print $1}'`
    done

    # Continue
    echo 'Restarting on load: ' $check
    # Start Apache
    apacheres=`service apache2 start`
    echo '#######################Apache Restart Log###########################'
    echo $apacheres
    echo '#######################Apache Restart Log###########################'
    echo '###########-----------------------------------------################'
  fi
  # Keep checking load
  sleep 3
done
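The load check at the heart of the script can be illustrated on its own. /proc/loadavg starts with the 1-minute average, e.g. "12.34 8.10 5.02 2/345 6789" (hypothetical values); replacing the first dot with a space makes awk's $1 the integer part, which the script can then compare with `-gt`:

```shell
# Hypothetical sample of /proc/loadavg contents
sample="12.34 8.10 5.02 2/345 6789"
# Same pipeline the script uses, fed from the sample instead of the live file
check=$(printf '%s\n' "$sample" | sed 's/\./ /' | awk '{print $1}')
echo "$check"   # prints 12
```

This is also why the thresholds passed to the script must be integers: the comparison only ever sees the whole-number part of the load.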
]]>
https://www.derekdemuro.com/2013/08/02/server-monitor-little-dirty-script/feed/ 0 2031
Add Backup Mail Server Virtualmin https://www.derekdemuro.com/2013/06/21/add-backup-mail-server-virtualmin/ https://www.derekdemuro.com/2013/06/21/add-backup-mail-server-virtualmin/#respond Fri, 21 Jun 2013 07:29:35 +0000 https://www.derekdemuro.com/?p=3481 There are various guides online for setting up Postfix as a backup (hold-and-forward) mail server that knows which accounts are valid on the primary, so it only accepts mail for real accounts and doesn't generate backscatter spam. All of them are slightly different, and none of them worked out of the box for me. This is how I did it; the method is made easier by using Virtualmin on the primary, since it has already generated map files that we can use.

We need two files on the backup mail server: a list of domains to relay and a list of valid email accounts. With a little tweaking, we can copy a couple of Virtualmin configuration files over from the primary, and the backup will automatically know what the valid accounts are.

Note: if you’re using Virtualmin, make sure your alias domains copy the accounts from the domain they are aliasing rather than running as catch-all domains (which is the default setting); otherwise, it rather defeats the purpose of specifying the valid accounts. To set this up for future-created domains, go to System Settings, Server Templates, select your template(s), go to Mail for a domain, change “Mail alias mode for alias domains” to “Copy aliases from target”, and Save. If you have existing catch-all alias domains, select each domain, go to Server Configuration, Email Settings, and change the Mail aliases mode there.

So, back to those two files. They need to be files that Postfix can turn into hash maps for its own use. The standard format is the domain (or email address), then a space, then the word OK, so each line would be:
user@domain.com OK

However, the second part can be anything for the hash to work; it doesn't have to be OK. Virtualmin creates a virtual mapping file that maps email addresses to accounts, which serves as an excellent relay recipient list. The data is at /etc/postfix/virtual, so the first step is to copy that file to the backup as /etc/postfix/relay_recipients. (This all needs to be done as the root user.)

[primary] scp /etc/postfix/virtual user@backupmx.com:~
[backup] cp /home/user/virtual /etc/postfix/relay_recipients

Then we need to tell Postfix to use those email addresses, so as root:

[backup] postmap /etc/postfix/relay_recipients

If that fails, you may need to adjust the permissions of /etc/postfix so that postmap can create the .db file, then rerun the command (777 is overly permissive, so consider restoring tighter permissions afterwards):

[backup] chmod 777 /etc/postfix
[backup] postmap /etc/postfix/relay_recipients

Then update the Postfix configuration and reload it:

[backup] postconf -e "relay_recipient_maps = hash:/etc/postfix/relay_recipients"
[backup] postfix reload

Next, we need a list of valid domains to relay. Virtualmin maintains a map of domains for itself in the format domain.com=1234567890, so all we need to do is replace the equals sign with a space and we have a valid map file.

[primary] scp /etc/webmin/virtual-server/map.dom user@backupmx.com:~
[backup] sed -i 's/=/ /g' /home/user/map.dom
[backup] cp /home/user/map.dom /etc/postfix/relay_domains

Then similar to above, update the config and reload:

[backup] postmap /etc/postfix/relay_domains
[backup] postconf -e "relay_domains = hash:/etc/postfix/relay_domains"
[backup] postfix reload

That’s it! If there were no problems, they are now in sync.

You don’t want to have to do this manually each time, so we set up an SSH key pair (so no password prompt is needed) and then create scripts that run automatically every few minutes to keep the two servers in sync.

Creating a passwordless key-pair is pretty simple. Type in:

[primary] ssh-keygen -t rsa

Use the default info and no password. Then copy to the backup:

[primary] ssh-copy-id -i ~/.ssh/id_rsa.pub user@backup.com

Enter the remote password. All done.

Now we need to create the scripts:

[primary] vi /home/user/backupmx.sh

What we’ll do is check whether Virtualmin’s files are newer than the ones we last copied to the backup and, if so, copy over the original data (updating the domain map along the way). So paste in:

#!/bin/sh
# postfix/vmin backup mx file 1/2 (primary)
# copy virtual (valid email addresses)
 if test /etc/postfix/virtual -nt /home/user/virtual
  then
  cp /etc/postfix/virtual /home/user/virtual
  scp /home/user/virtual user@backupmx.com:~
 fi
# copy map.dom (list of domains)
 if test /etc/webmin/virtual-server/map.dom -nt /home/user/map.dom
  then
  cp /etc/webmin/virtual-server/map.dom /home/user/map.dom
  sed -i 's/=/ /g' /home/user/map.dom
  scp /home/user/map.dom user@backupmx.com:~
 fi
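These sync scripts hinge on test's `-nt` (newer-than) operator, which compares file modification times. A quick self-contained illustration:

```shell
# -nt is true when the first file's mtime is newer than the second's.
old=$(mktemp)
sleep 1                       # ensure distinct mtimes even at 1s resolution
new=$(mktemp)
if test "$new" -nt "$old"; then
  echo "copy needed"          # prints: copy needed
fi
rm -f "$old" "$new"
```

Because `cp` updates the destination's mtime, a successful sync makes the `-nt` test false again until Virtualmin next rewrites its files, so the cron job is cheap when nothing has changed.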

And a similar script on the backup: check whether the copied files are newer than the ones Postfix is using and, if so, install them and reload the config:

[backup] vi /home/user/backupmx.sh

And paste in:

#!/bin/sh
# postfix/vmin backup mx file 2/2 (backup)
# copy virtual (valid email addresses)
 if test /home/user/virtual -nt /etc/postfix/relay_recipients
  then
  cp /home/user/virtual /etc/postfix/relay_recipients
  /usr/sbin/postmap /etc/postfix/relay_recipients
  /usr/sbin/postfix reload
 fi
# copy map.dom (list of domains)
 if test /home/user/map.dom -nt /etc/postfix/relay_domains
  then
  cp /home/user/map.dom /etc/postfix/relay_domains
  /usr/sbin/postmap /etc/postfix/relay_domains
  /usr/sbin/postfix reload
 fi

Then add those scripts to the crontab on both systems:

[primary] crontab -e

And add the line (to run every 5 minutes):

*/5 * * * * /home/user/backupmx.sh

Do the same on the backup computer.

Voila. Automated secondary mail server.

]]>
https://www.derekdemuro.com/2013/06/21/add-backup-mail-server-virtualmin/feed/ 0 3481
Removing Linux User https://www.derekdemuro.com/2013/05/03/removing-linux-user/ https://www.derekdemuro.com/2013/05/03/removing-linux-user/#respond Fri, 03 May 2013 07:34:54 +0000 https://www.derekdemuro.com/?p=3546 How to safely remove a user from a Linux system:

Keep in mind that the following process is meant to make sure a user is SAFELY removed from the system. If you just want to delete the user, jump to “To delete the user account called [user], enter:”

The following is the recommended procedure for deleting a user from a Linux server. First, lock the user account:

passwd -l username

Backup files from /home/user to /backup:

tar -zcvf /backup/account/deleted/[user].$uid.$now.tar.gz /home/[user]/
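A hypothetical sketch of building that backup filename; "jdoe" is a placeholder account name, and the date format is just one reasonable choice:

```shell
user=jdoe
# UID of the account; falls back to 0 if the account no longer exists
uid=$(id -u "$user" 2>/dev/null || echo 0)
# Timestamp, e.g. 2013-05-03_073454
now=$(date +%Y-%m-%d_%H%M%S)
echo "/backup/account/deleted/${user}.${uid}.${now}.tar.gz"
```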

Replace $uid and $now with the actual UID and date/time. The userdel command will not allow you to remove an account while the user is logged in, so you must first kill any running processes belonging to the account you are deleting:

pgrep -u [user]
ps -fp $(pgrep -u [user])
killall -KILL -u [user]

To delete user account called [user], enter:

userdel -r [user]

Delete any at jobs, enter:

find /var/spool/at/ -name "[^.]*" -type f -user [user] -delete

To remove cron jobs, enter:

crontab -r -u [user]

To remove print jobs, enter:

lprm [user]

To find all files owned by user [user], enter:

find / -user [user] -print

You can find files owned by a user called [user] and change their ownership as follows:

find / -user [user] -exec chown newUserName:newGroupName {} \;


]]>
https://www.derekdemuro.com/2013/05/03/removing-linux-user/feed/ 0 3546