Personal – Derek Demuro https://www.derekdemuro.com Software Engineer Tue, 03 Jun 2025 17:12:40 +0000 en-US hourly 1 160473225 Working with GPT Partition tables, CRC32’s and byte-data in hard drives. https://www.derekdemuro.com/2017/06/30/working-with-gpt-partition-tables-crc32s-and-byte-data-in-hard-drives/ https://www.derekdemuro.com/2017/06/30/working-with-gpt-partition-tables-crc32s-and-byte-data-in-hard-drives/#respond Fri, 30 Jun 2017 06:18:31 +0000 https://www.derekdemuro.com/?p=3161 Calculating CRC32 for GPT Volumes.
MBR vs GPT Partitioning

Today, while having to manually calculate the CRC for the GPT header and the GPT partition table, I couldn’t find any guide on how to perform these calculations manually (literally by hand), so here it is.

GPT Volume Structure (Wikipedia)

From the image, we can tell we have:

  1. 512 bytes for the protective MBR (ending in 55 AA).
  2. 512 bytes for the GPT header.
  3. Of those, 92 bytes are the actual header; the rest is reserved.
  4. 128 bytes for each partition entry.

(All of the above is mirrored at the end of the drive, in reverse order.)

Offset     Length    Contents
0 (0x00)   8 bytes   Signature (“EFI PART”, 45h 46h 49h 20h 50h 41h 52h 54h, or 0x5452415020494645ULL on little-endian machines)
8 (0x08)   4 bytes   Revision (for GPT version 1.0, through at least UEFI version 2.3.1, the value is 00h 00h 01h 00h)
12 (0x0C)  4 bytes   Header size in little endian (in bytes, usually 5Ch 00h 00h 00h, i.e. 92 bytes)
16 (0x10)  4 bytes   CRC32/zlib of header (offset +0 up to header size) in little endian, with this field zeroed during calculation
20 (0x14)  4 bytes   Reserved; must be zero
24 (0x18)  8 bytes   Current LBA (location of this header copy)
32 (0x20)  8 bytes   Backup LBA (location of the other header copy)
40 (0x28)  8 bytes   First usable LBA for partitions (primary partition table last LBA + 1)
48 (0x30)  8 bytes   Last usable LBA (secondary partition table first LBA – 1)
56 (0x38)  16 bytes  Disk GUID (also referred to as UUID on UNIXes)
72 (0x48)  8 bytes   Starting LBA of array of partition entries (always 2 in primary copy)
80 (0x50)  4 bytes   Number of partition entries in array
84 (0x54)  4 bytes   Size of a single partition entry (usually 80h, i.e. 128)
88 (0x58)  4 bytes   CRC32/zlib of partition array in little endian
92 (0x5C)  *         Reserved; must be zeroes for the rest of the block (420 bytes for a sector size of 512 bytes; can be more with larger sector sizes)
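To make that layout concrete, here’s a minimal Python sketch (just an illustration, not the exact tooling I used; the LBA values are made up) that builds a synthetic 92-byte header and computes its CRC32 with zlib, zeroing the CRC field first as the table requires:

```python
import struct, zlib

# Build a minimal synthetic GPT header (92 bytes) so the sketch is self-contained;
# on a real disk you'd read it from sector 1 (byte offset 512).
header = bytearray(92)
header[0:8]   = b"EFI PART"                    # signature
header[8:12]  = struct.pack("<I", 0x00010000)  # revision 1.0 -> 00 00 01 00 on disk
header[12:16] = struct.pack("<I", 92)          # header size
# bytes 16:20 = header CRC32, left zeroed for now
header[24:32] = struct.pack("<Q", 1)           # current LBA (example value)
header[32:40] = struct.pack("<Q", 0x1DCF32AF)  # backup LBA (example value)
header[72:80] = struct.pack("<Q", 2)           # partition array starts at LBA 2
header[80:84] = struct.pack("<I", 128)         # number of partition entries
header[84:88] = struct.pack("<I", 128)         # size of one entry

# The header CRC is computed over the first header-size bytes
# with the CRC field itself (offset 16, 4 bytes) set to zero.
crc = zlib.crc32(bytes(header)) & 0xFFFFFFFF
header[16:20] = struct.pack("<I", crc)         # write it back, little-endian

# Verify: zero the field again and recompute.
check = bytearray(header)
check[16:20] = b"\x00\x00\x00\x00"
assert zlib.crc32(bytes(check)) & 0xFFFFFFFF == struct.unpack("<I", header[16:20])[0]
print("header CRC32 = %08X" % crc)
```

The same zero-the-field-then-recompute dance is what you do by hand in the hex editor.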
 

From Wikipedia, we can see the great table above that represents all the contents of the GPT header. Next, let’s check the structure of a partition entry so we can move on with a good base.

Offset     Length    Contents
0 (0x00)   16 bytes  Partition type GUID
16 (0x10)  16 bytes  Unique partition GUID
32 (0x20)  8 bytes   First LBA (little endian)
40 (0x28)  8 bytes   Last LBA (inclusive, usually odd)
48 (0x30)  8 bytes   Attribute flags (e.g. bit 60 denotes read-only)
56 (0x38)  72 bytes  Partition name (36 UTF-16LE code units)
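As a quick illustration of that entry layout, here’s a hedged Python sketch (the GUIDs, LBAs, and name are invented for the example) that packs one 128-byte entry and parses it back:

```python
import struct, uuid

# Synthetic 128-byte partition entry following the table above.
entry = bytearray(128)
entry[0:16]  = uuid.UUID("0FC63DAF-8483-4772-8E79-3D69D8477DE4").bytes_le  # Linux filesystem type GUID
entry[16:32] = uuid.uuid4().bytes_le                                       # unique partition GUID
entry[32:40] = struct.pack("<Q", 2048)        # first LBA
entry[40:48] = struct.pack("<Q", 1050623)     # last LBA (inclusive)
entry[48:56] = struct.pack("<Q", 1 << 60)     # attribute flags: bit 60 = read-only
entry[56:128] = "root".encode("utf-16-le").ljust(72, b"\x00")  # 36 UTF-16LE units

# Parse it back out.
first, last = struct.unpack_from("<QQ", entry, 32)
attrs, = struct.unpack_from("<Q", entry, 48)
name = entry[56:128].decode("utf-16-le").rstrip("\x00")
print(name, first, last, bool(attrs >> 60 & 1))   # root 2048 1050623 True
```

Note that every multi-byte field goes to disk little-endian, which is exactly what `struct`’s `<` prefix does.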

Just for completeness, here are the partition attribute flag bits:

Bit     Content
0       Platform required (required by the computer to function properly; an OEM partition, for example; disk partitioning utilities must preserve the partition as is)
1       EFI firmware should ignore the content of the partition and not try to read from it
2       Legacy BIOS bootable (equivalent to the active flag, typically bit 7 set, at offset +0h in partition entries of the MBR partition table)
3–47    Reserved for future use
48–63   Defined and used by the individual partition type
 
Bit     Content
60      Read-only
61      Shadow copy (of another partition)
62      Hidden
63      No drive letter (i.e. do not automount)
 

How do I make sure I get a valid GPT Partition Table?

First, make sure you are familiar with dd (“data destroyer”, haha). It’s an excellent tool, and you should know it well if you’re going to be playing with data on a hard drive hands-on.
Quick recap: seek, skip, bs, count, conv, if, and of should all be second nature to you, blindfolded.

  1. seek: skip BLOCKS obs-sized blocks at the start of the output file.
  2. skip: skip BLOCKS ibs-sized blocks at the start of the input file.
  3. count: copy only BLOCKS input blocks.
  4. conv: convert the file as per the comma-separated symbol list.
  5. bs: read and write BYTES at a time.
  6. if: input file.
  7. of: output file.
  8. conv=notrunc: I’m mentioning this one specifically since it’s 100% required to AVOID blowing up your data all at once successfully.
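For reference, here is roughly what a dd extraction of the GPT header and partition array does, sketched in Python against an in-memory fake disk (a real run would target a device like /dev/sdX; the fake contents here are just filler):

```python
# Rough Python equivalent of:
#   dd if=/dev/sdX of=gpt.bin bs=512 skip=1 count=33 conv=notrunc
# skip=1 moves past the protective MBR on the INPUT; seek would offset the OUTPUT.
import io

BS = 512
# Fake disk: one MBR sector of zeros, then 33 sectors of filler ("EFI " repeated).
disk = io.BytesIO(bytes(BS) + b"\x45\x46\x49\x20" * (BS * 33 // 4))

disk.seek(1 * BS)            # skip=1  : skip one bs-sized block of the input
data = disk.read(33 * BS)    # count=33: copy 33 blocks (header + partition array)

with open("gpt.bin", "wb") as out:
    out.seek(0 * BS)         # seek=0  : where to start writing in the output
    out.write(data)          # conv=notrunc would leave the rest of an existing file intact

print(len(data))             # 16896 bytes = 33 sectors
```

The mapping is mechanical: skip and seek are just file offsets in units of the block size, and notrunc means “don’t truncate the output file first.”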

You will also need a crc32 utility to perform the calculation on the dd-generated binaries, because we’re going to have to write that data back into the files by hand. And I’d recommend getting comfortable with a hex editor, since we’ll be editing those files manually.

Remember that MUCH of the data on the hard drive is written byte-reversed (little-endian) relative to how we usually write it in natural language. For example, if our CRC result were the hex value F7E58D1F, we’d separate it into bytes (F7 E5 8D 1F) and then write it in the hex editor, in the appropriate place, as 1F 8D E5 F7. So yeah, REMEMBER: not mirrored, but byte-reversed; if you wrote F1 D8 5E 7F, you did it wrong!
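A tiny Python check makes the byte order unambiguous; `struct`’s `"<I"` format does exactly this little-endian packing:

```python
import struct

crc = 0xF7E58D1F
# On disk, multi-byte values are little-endian: least significant byte first.
on_disk = struct.pack("<I", crc)
print(on_disk.hex(" ").upper())               # 1F 8D E5 F7 -> what you type in the hex editor

# Byte-reversed, NOT nibble-mirrored:
assert on_disk == bytes([0x1F, 0x8D, 0xE5, 0xF7])
assert on_disk != bytes.fromhex("F1D85E7F")   # the "mirrored" trap described above
```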

So let’s take, for example, a dump of the GPT header and partition table: that’d be 16896 bytes, or 33 sectors (divide the number of bytes by 512).
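Once you have such a dump, recomputing the partition-array CRC (the one stored at header offset 0x58) is just a crc32 over entries × entry-size bytes, typically 128 × 128 = 16384, starting one sector into the dump. A sketch, using a fabricated all-zero dump so it runs standalone:

```python
import zlib

# Stand-in for a dd dump of LBA 1..33 (header sector + partition array),
# 16896 bytes total; a real dump would come from the disk.
SECTOR = 512
dump = bytes(33 * SECTOR)
assert len(dump) == 16896 and len(dump) // SECTOR == 33

# The partition-array CRC covers entries * entry_size bytes (usually 128 * 128),
# starting at LBA 2, i.e. offset 512 within this dump.
array = dump[SECTOR : SECTOR + 128 * 128]
crc = zlib.crc32(array) & 0xFFFFFFFF
print("partition array CRC32 = %08X" % crc)
```

That value then gets written, byte-reversed as explained above, into offset 0x58 of the header, after which the header CRC at 0x10 has to be recomputed too.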

Using Okteta on Linux, you can quickly calculate the checksum of a highlighted part of the dump, as the next screenshot shows.

Okteta showing how to edit the bytes
Okteta editing some EFI headers…
I’m leaving Google. Dropbox, One Drive, and others. https://www.derekdemuro.com/2016/12/05/im-leaving-google-dropbox-one-drive-and-others/ https://www.derekdemuro.com/2016/12/05/im-leaving-google-dropbox-one-drive-and-others/#respond Mon, 05 Dec 2016 23:48:35 +0000 https://www.derekdemuro.com/?p=1801 I left Google!

Well, great clickbait. It’s nothing against Google per se, but I left many free-ish services.

Google, let me finish my phrase

List:

  1. Dropbox.
  2. OneDrive.
  3. Google Drive.
  4. Gmail.
  5. etc.

You get the drift; great services that were serving a purpose, but… eh, more info below.

Dropbox / One Drive / Google Drive:

All this is great if you’re a home user moving pet pictures around. But usually nothing comes for free unless you’re giving something up in return. And lately, after using ownCloud, I realized I was no longer using these services. (ownCloud replaced all of them and sent them to the garbage.)

ownCloud gives me phone sync, offline access, and, better yet, it’s cross-platform. I have it hooked up to a GlusterFS backend with around 6 TB of usable storage in RAID 1 style (RAID 1 on Gluster, physical RAID 1 on the machine).

So the odds of 100% data loss are pretty slim, given that I also perform a weekly full backup and a daily differential to another set of clustered drives.

In short, I get pretty much free 6 TB storage, which I can share, etc., and better yet… I know where it’s stored and how secure it is. Not saying these services weren’t reliable but who the heck knows where those bits were stored.

Gmail:

I’ve been using Gmail pretty much since it first saw the light, and it’s been a great ride. But as lovely as it is, besides a pretty easy UI it wasn’t giving me much more than I can already get with my own servers. It was pretty much me doing them a favor rather than the other way around.

But oh man, the ads had me tired. Then I decided to start signing my emails with PGP and encrypting some of them with the people I talk to… but hey, no support in the web client. So it was about time: I left it altogether, dropped it.

Is it for everyone? No, is it for me? Yes!

In short, I’m planning on finding free-as-in-beer, privately hosted equivalents to many of the services I use online. In all honesty, I’m doing them a favor: they no longer have to store my wife’s cat pictures.

THE PAIN:

Moving forward with this idea made me realize many services depend on Email as an identifier. Unfortunately, this is horrible for many reasons; mainly, if you ever change it, you’re doomed! So, for now, it’s going to be a forward to my new address and hope for the best.

Finally… this picture will explain it all:

Thank you for all these years… till you turned evil.
Downtime at MVD 01 Location. https://www.derekdemuro.com/2015/12/14/downtime-at-mvd-01-location/ https://www.derekdemuro.com/2015/12/14/downtime-at-mvd-01-location/#respond Mon, 14 Dec 2015 23:50:12 +0000 https://www.derekdemuro.com/?p=1811 Today we suffered a Network Outage due to a power grid failure on a 500 KVa line that brings power from the hydroelectric dams in the central part of the country.

This caused a power failure in Antel’s equipment that left the datacenter’s main network equipment without connectivity for ~30 minutes.

Regarding the equipment:

UPS: rode through the outage entirely, dropping only ~8% of its capacity, with an actual load of 17% on the redundant grid.

Router: Since Sunday we’ve been running on our main backup router, leaving some services offline. This was due to a RAID failure on the primary router that dropped it out of the cluster.

Resolution: There’s a planned maintenance window from tonight through tomorrow morning, re-configuring the network to avoid future issues.

Current status: UPS capacity has been restored to 100%, leaving us ready for another network outage.
No maintenance was required, and no intervention was needed.

Temperature Warning: During the power outage, the central HVAC unit shut down to save power, until alarms went off 20 minutes after the power loss.
HVAC was restored after the alarm went off, resuming regular operation.

Las Flores Location:

Las Flores remains offline until further notice; the primary power failure persists, though it isn’t affecting any services. Will update once the situation normalizes.

Networking for Systems Engineers https://www.derekdemuro.com/2015/11/05/networking-for-systems-engineers/ https://www.derekdemuro.com/2015/11/05/networking-for-systems-engineers/#respond Thu, 05 Nov 2015 23:52:15 +0000 https://www.derekdemuro.com/?p=1821 I joined Sophilabs back in March, and since then I’ve been offering my training and putting my company-infrastructure experience to work to make their support better.

That being said, my coworkers found the backbone of how the internet works entertaining, so I decided to dedicate the time the company provides for talks to this topic and make it a Friday talk. Unfortunately, 1 hour was quite a short time to give them as much info as possible and have them take something useful home.

Topics:

  • DNS
  • TCP/Packets
  • Wireshark
  • Networking
  • Internet Evolution
  • Spoofing

Pretty much all of that was covered, and keep in mind the basic idea behind the presentation was to offer a realistic view of how the internet works and how it was built.

To end with a practical way of understanding how this knowledge can be used to the company’s advantage, I altered the company website… with a version of my own, LOL, where I chopped off heads and swapped people’s heads around, and of course pulled the childhood prank of changing positive messages into negative ones.

Remember: training employees, getting involved, and caring about what you learn makes both the company and the employees more valuable.

Cheers!

Download “Networking for systems engineers” Networking-for-systems-engineers-Autosaved.pptx – Downloaded 462 times – 6.23 MB
Garage DataCenter Evolution https://www.derekdemuro.com/2015/11/05/garage-datacenter-evolution/ https://www.derekdemuro.com/2015/11/05/garage-datacenter-evolution/#respond Thu, 05 Nov 2015 22:27:50 +0000 https://www.derekdemuro.com/?p=1641 THIS IS NOT RELATED TO THE OTHER SERVERS RUNNING IN Los Angeles and Secaucus. This is running at my house. Check the network diagram so you know what I’m talking about.

Garage Data Center project. Through the years.

So, the whole point of the project was to start the skeleton of the company I built after this. Funny enough, the company is still around; we’re still hosting websites, databases, and VPNs, and we’re moving more data every year. We now want to share with you how this started and evolved.
The funny, the sad, and the truth.

We started with a negative budget; lol, yeah, we started without any money. We would work here later on, but not for long; soon it just turned into running more and more machines.

The place:

It was located behind my house, in a garage. We took half the space of the garage and fixed it up enough to hold the servers safely. It’s worth noting this is in Montevideo, Uruguay. Most things are crazy expensive here, and we did this without a budget: we mostly bought what we could and managed as we moved forward. God only knows what we would have built with a budget.

2009 The beginning.

Specs:

1x Router (I think it was a PIV HT, 3 GHz? 1 GB RAM?) [PfSense ?]

1x Server (Dual Core? 4 GB RAM? Running Ubuntu). [10.04 ?]

1x Switch 100 MB FD 8 port.

The connection was 10 Mbit down and 2 Mbit up, I believe, over copper PPPoE.

Here is where we’d put the lab: drinks, chairs that later turned into computer chairs, some more drinks (just kidding, we were tired since we were working). It’s also worth mentioning that this was complicated, since everything we did here had to be easily removable (remember, this is my grandmother’s house).

This is the back, where the main entrance would be, which made it extremely convenient to get the machines inside.

Haha, that old monitor. Can you believe that’s been working for the past 12 years?

Unfortunately, we don’t have pictures of how the thing shifted over the years. But this place was a warehouse, and it’s still a sound server room; we added a UPS to keep the servers and the network going for at least 4 hours. [Keep in mind this is in a plain residential area, so adding a generator was not an option. Looking back at our power company’s history, within the past 10+ years we’ve had no prolonged power outage longer than 2 hours. So we figured that planning for double that, or at least half a workday with only critical servers on, would be enough.]

Planning on the batteries correctly, sizing them, wiring them, modifying the UPS, buying an inverter, and trying to make it work… that whole process doesn’t have pictures, unfortunately, as of the writing of this post since I don’t have them with me. If I do find them, I’ll upload them.

But we had to measure every machine’s power consumption, figure out what was crucial and what wasn’t, and size the wiring we’d be using to be sure nothing would catch fire; choose between car batteries and deep-cycle ones; and choose the air conditioning. Bear in mind we’d have to be able to dismantle this place in hours in the event the house was sold or we moved out.

In short:

AC: a 12000 BTU portable AC unit was chosen and modified because it was a single-hose unit instead of a dual-hose one, which proved horrible at cooling the systems. So we reverse-engineered it and figured out a way to fix it. We’ll include pictures of this.

Another thing we wanted was climate control (humidity and temperature). For that we used a Raspberry Pi 1; we’ve since changed to a Raspberry Pi 2 B.

We’ll add pictures of how we pulled together the UPS monitoring and how we manage the AC unit inside the workplace. It’s also worth noting why we did all this: I planned to move to the United States while leaving all this equipment partially unattended, and if the moment came that I had to log into any of these systems, or there was an overheating problem, I’d have to step in remotely to fix it.
Don’t worry, my brother and parents are ten steps away in case of a REAL emergency. Still, I believe it’s somewhat refreshing to share this experience. Soon I’ll hopefully be departing to the US, leaving all this running. Let’s see how well that experiment holds, lol.

Yes, evolution in progress, lol. That mess needs some work, but unfortunately we don’t have that much time. I’ll try to take pictures after I get to it.

Okay, those are general pictures of some of the servers running here. Let’s get more into detail.

Common to all servers:

  1. Dual Gigabit Network cards.
  2. Servers run RAID 1 for the OS.
  3. Servers run Debian 8 with modified Kernel and custom-built KVM Image.
  4. Servers have 10+ GB RAM.
  5. System images are stored in a different RAID 1.
  6. We store data on top of GlusterFS, and some of it on USB Drives! :O.

Done, that being said we have:

  1. 1x I7 Server
  2. 1x I3 Server
  3. 2x Core2Duo with VT-d VT-x
  4. 2x Core2Duo without VT-d

Storage:

root@mvd02:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               83G   13G   67G  16% /
udev                   10M     0   10M   0% /dev
tmpfs                 3.2G   90M  3.1G   3% /run
tmpfs                 7.9G     0  7.9G   0% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/md2              825G  631G  153G  81% /home
/dev/sdd1             2.7T  1.1T  1.6T  41% /mnt/ex3TBUSB
/dev/sde2             917G  742G  129G  86% /data/glusterfs/backup/brick1
/dev/sdb1             1.8T   90G  1.7T   6% /data/glusterfs/takelan/brick1
tmpfs                 1.6G  4.0K  1.6G   1% /run/user/1000
tmpfs                 1.6G     0  1.6G   0% /run/user/0
127.0.0.1:/takelan    1.8T   90G  1.7T   6% /mnt/glusterfs/takelan
127.0.0.1:/backup     917G  742G  129G  86% /mnt/glusterfs/backup
10.1.0.3:/no-replica  1.4T  301G 1005G  24% /mnt/glusterfs/no-replica
root@mvd01:~# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/md0                83G   18G   61G  22% /
udev                    10M     0   10M   0% /dev
tmpfs                  4.7G   74M  4.6G   2% /run
tmpfs                   12G   68K   12G   1% /dev/shm
tmpfs                  5.0M  4.0K  5.0M   1% /run/lock
tmpfs                   12G     0   12G   0% /sys/fs/cgroup
/dev/md2               825G  576G  208G  74% /home
/dev/sdd1              1.8T   90G  1.7T   6% /data/glusterfs/takelan/brick1
/dev/sdf1              917G  742G  129G  86% /data/glusterfs/backup/brick1
/dev/sde1              459G  190G  246G  44% /data/glusterfs/no-replica/brick2
/dev/sdc2              917G  112G  759G  13% /data/glusterfs/no-replica/brick1
/dev/sdc1              917G  742G  129G  86% /data/glusterfs/backup/brick2
127.0.0.1:/takelan     1.8T   90G  1.7T   6% /mnt/glusterfs/takelan
127.0.0.1:/backup      917G  742G  129G  86% /mnt/glusterfs/backup
tmpfs                  2.4G  8.0K  2.4G   1% /run/user/1000
tmpfs                  2.4G     0  2.4G   0% /run/user/0
127.0.0.1:/no-replica  1.4T  301G 1005G  24% /mnt/glusterfs/no-replica
root@mvd03:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda2               136G   56G   67G  46% /
udev                     10M     0   10M   0% /dev
tmpfs                   1.2G  114M 1019M  11% /run
tmpfs                   2.8G     0  2.8G   0% /dev/shm
tmpfs                   5.0M     0  5.0M   0% /run/lock
tmpfs                   2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/sdb1               917G  246G  626G  29% /mnt/kvm
10.1.0.2:/mnt/ex3TBUSB  2.7T  1.1T  1.6T  41% /mnt/ex3TBUSB
[2.2.6-RELEASE][root@router.uy.takelan.com]/root: df -h
Filesystem                      Size    Used   Avail Capacity  Mounted on
/dev/mirror/pfSenseMirrors1a    443G     12G    396G     3%    /
devfs                           1.0K    1.0K      0B   100%    /dev
/dev/md0                        3.4M    196K    2.9M     6%    /var/run
devfs                           1.0K    1.0K      0B   100%    /var/dhcpd/dev
procfs                          4.0K    4.0K      0B   100%    /proc

Yes, all systems run on RAID 5 | RAID 1, because I’m paranoid and I’ve had drives dying on me at the worst moment.

That being said, it’s not a crazy setup, but it keeps the ball rolling. I get to back up everything, from customers to family, and I get tons of storage for myself. You can see above what’s on Gluster and what isn’t. Everything on Gluster is also running on a mirror; that’s because, in the event of a server failure, I have the VMs cloned and ready to spin up. I chose the Docker approach, since many of my services run on Docker, to keep the VMs as thin shells and keep the data directly on GlusterFS, accessed through a virtual network adapter in the VMs.

Security:

  1. Cameras on the roofs of the houses.
  2. A camera on the perimeter.
  3. A camera pointing at the south door.
  4. A camera pointing at the east door.
  5. A camera pointing at every house’s front perimeter.
  6. A camera pointing at the servers.
  7. Dual firewalls in cascade: pfSense with CARP.
  8. VPN access strictly limited.
  9. Suricata in place for closer inspection of the network.

Network:

The main backbone is 120 Mbit down, 13 Mbit up.

This is a close approximation of the data moved by the servers to the outside world.

Why tower casings?

Simple: they are everywhere, and they are cheap. And if there’s one thing I didn’t need, it’s density (or excessive noise). Plus, having them spaced so far apart allows me to keep the AC low or off except in summertime.

Uninterruptible Power Supply?:

Yes, since the moment we took this picture, we’ve added two more batteries. They are 90 Ah each, totaling 360 Ah in reserve.

So far I have not tested how long it’ll keep everything except the AC running. But we added a big turbine inside to keep the components cool.

It may seem enormous, but it’s not. We never pull more than 600 W out of it, EVER, lol. The problem we have is keeping the servers running for long.

It’s been 2 years since these two went up… and we haven’t had a single power outage. We should also note that we have three-phase power to this place and a corporate contract with the power company.

You said, temperature control?

This is the AC; you can see, on the air inlet, the DHT11 sensor keeping track of the air temperature and humidity before it goes through the cooling stage. Yes, I know it’s dirty, and it’s a lot of duct tape. But hey, if I ever take this down, I want to be able to salvage this AC unit.

That being said, it works like a charm. The reason for the box below, covering the other inlet, is that this was a portable AC unit: since it originally has only one hose, air from inside the server room would be exhausted outside after cooling the compressor. So instead, I added a second hose with a grille outside, so it pulls outside air, cools the compressor, and exhausts it back out, rather than forcing cold air from the inside out. I can give a longer explanation later of how we noticed this problem.

But honestly, how we noticed it is pretty straightforward.

So what are the sensors attached to? How do I remotely control the power of everything in here without stepping into the room? Simple: a SainSmart 8-relay board, a DHT11 sensor, an IR transmitter, and big-ass contactors for the AC and the primary power feed.

Yes, that box with a fan, a CF card, an SD card, a red light, and a blue thingy is the box containing the Raspberry Pi, the relay board, etc.

The fan had to be glued and duct-taped along with the cable, since it’s high-RPM and vibrates. Unfortunately, we had to use silicone plus duct tape, since the top part of the box is tight.

The box has four holes: one in front for the USB and power cables, one in back for power, and two on the sides to connect to the relay board.

The Raspberry Pi is running a PHP script that shows a summary of all services: NUT for the UPS, the relay board status, and the air conditioning. Alongside it, a JSON web server is polled by a Java application.

In case you’re wondering, that’s the DHT11 sensor I told you about: 0–50 C and 20–95% humidity. Perfect for my scope, since 95%+ relative humidity means the datacenter is submerged underwater, and 50+ Celsius means the datacenter is probably on fire.
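The actual monitoring script is PHP, but the alarm logic boils down to something like this hypothetical Python sketch (the function name and thresholds are my own assumptions, chosen just below the DHT11’s 50 C / 95% RH ceilings):

```python
# Hypothetical sketch of the alarm logic: flag readings near the
# edge of the DHT11's useful envelope.
TEMP_MAX_C = 45       # assumed alert threshold, below the sensor's 50 C ceiling
HUMIDITY_MAX = 90     # assumed alert threshold, below the 95% RH ceiling

def check_reading(temp_c, rh):
    """Return a list of alarm strings for one DHT11 reading."""
    alarms = []
    if temp_c >= TEMP_MAX_C:
        alarms.append("overheat: %d C" % temp_c)
    if rh >= HUMIDITY_MAX:
        alarms.append("humidity: %d%%" % rh)
    return alarms

print(check_reading(28, 55))   # normal room: []
print(check_reading(51, 96))   # both alarms fire
```

Anything the checks flag ends up on the status page and, if needed, triggers the relays.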

I can explain how I connected all of that later on; I forgot to show a picture of the IR transmitter taped to the AC. And there, you can see the AC with the sensor and the receiver while I was testing whether the GPIO pins would provide enough power to drive the IR transmitter.

What about the networking gear? Why is that white wire in the way? No idea, lol; probably dust? The gray cable is CAT6.

But we also have IP Phones… how? Digium card, shown below.

There you can see my development Raspberry Pi 2, and the one that’s going into service. The parts you see wrapped in long black tape have a metal wire inside for shaping.

The cables, once I placed them. Who cares! No one cares!

DLink KVM Switch, Lantronix Spider, connected to it. With Aux and Main power.

The Lantronix Spider is an excellent device, along with the D-Link switch. It allows me to work on the systems remotely without worrying about getting locked up at boot or having to change BIOS settings. It’s also a great alternative to IPMI.

Sorry, unfortunately, I can’t show any more. Things have changed since then, and it doesn’t seem like the best idea to publish pictures of how it looks right this second, just for security. No worries: as time goes by and I change it more, you’ll get updates.
Also, stick around for posts on how I manage all the systems, how I did the whole Raspberry Pi project, and everything I’m capable of doing without stepping into that room.

VirtualBox 5.0 https://www.derekdemuro.com/2015/09/27/the-constant-fight-of-time-vs-quality/ https://www.derekdemuro.com/2015/09/27/the-constant-fight-of-time-vs-quality/#respond Sun, 27 Sep 2015 23:53:44 +0000 https://www.derekdemuro.com/?p=1831 “VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. Not only is VirtualBox an extremely feature-rich, high-performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL) version 2. See “About VirtualBox” for an introduction.” 

Straight from VirtualBox website.

opensuse 13 on windows 7

As some may know, part of my duties is working with a custom distro and performing many “low level” changes to the operating system to make things work… let’s say, not as they are supposed to.

VirtualBox has been my virtualization platform ever since I made Linux my primary operating system. I say primary because, for testing purposes, Windows is still a big part of my life: Visual Studio, and so on.

VirtualBox continues to step up its game, at times a step behind VMware, but hey, they have $$, which is a fair advantage.

As I love openSUSE, here you have VirtualBox on Windows running openSUSE.

But enough; let’s list the awesome new features:

  1. Paravirtualization Support for Windows and Linux Guests: Significantly improves guest OS performance by leveraging built-in virtualization support on operating systems such as Oracle Linux 7 and Microsoft Windows 7 and newer.
  2. Improved CPU Utilization: Exposes a broader set of CPU instructions to the guest OS, enabling applications to make use of the latest hardware instruction sets for maximum performance.
  3. Support of USB 3.0 Devices: Guest operating systems can directly recognize USB 3.0 devices and operate at full 3.0 speeds. The guest OS can be configured to support USB 1.1, 2.0, and 3.0.
  4. Bi-Directional Drag and Drop Support for Windows: On all host platforms, Windows, Linux, and Oracle Solaris guests now support “drag and drop” of content between the host and the guest. The drag and drop feature transparently allows the copying or opening of files, directories, and more.
  5. Disk Image Encryption: Data can be encrypted on virtual hard disk images transparently during runtime, using the industry-standard AES algorithm with up to 256-bit data encryption keys (DEK). This helps ensure data is secure and encrypted at all times, whether the VM is sitting unused on a developer’s machine or server, or actively in use.

As you know, at Takelan we use VirtualBox a LOT, and we use it even for mission-critical applications such as pfSense running some of our perimeter firewalls (www.pfsense.org). One thing we can’t complain about is performance.

With this new release, VirtualBox has made a HUGE step forward in virtualization technology, and just for the fun of it, here I am, typing this post straight from my Windows VM on openSUSE.

Why don’t I do it the other way around and virtualize openSUSE?

OpenSUSE 13.1

Because of HTOP! Nah, seriously, because Linux just makes my life easier.

And if you don’t see the coolness in there… then you’re not a real SysAdmin, lol.

And with that… all I can say is, Happy Sysadmin’s day everyone.

Jarvis – DLA (Digital Life Assistant) https://www.derekdemuro.com/2014/08/01/jarvis-dla-digital-life-assistant/ https://www.derekdemuro.com/2014/08/01/jarvis-dla-digital-life-assistant/#respond Fri, 01 Aug 2014 05:52:22 +0000 https://www.derekdemuro.com/?p=3006 The idea behind MultiAll

Abstract:

Managing multiple servers that do basically similar work and host in-house applications is a problem: configuration becomes a time hog. The goal is to automate server configuration, allowing fast deployments with no scaling limit.

The problem:

The problem MultiAll tackles is keeping all the servers in sync when they are in different “farms” or “grids”, and the fact that one server can provide services for more than one purpose. This is why we invented the “pool system”, where all the servers respond or act for a defined pool.

If a server responds to multiple pools, the server is the aggregation of every pool it serves.

Understanding how we can make it work:

Our research shows it’s possible to make multiple servers work toward a common goal by modifying the usual database-driven apps to keep a cache table: when a server is unable to communicate with the others in its pool, it saves the changes it needs to push to its sibling servers, and deploys them once they are back online.
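A rough sketch of that cache-table idea, with all names invented for illustration (the real system is far more involved): while peers are unreachable, changes queue locally; on reconnect, the queue is replayed in order.

```python
from collections import deque

class PoolNode:
    """Toy model of one server in a pool, queuing changes while peers are offline."""

    def __init__(self):
        self.data = {}
        self.cache = deque()       # the "cache table": pending changes for siblings
        self.peers_online = True

    def write(self, key, value):
        self.data[key] = value
        if self.peers_online:
            self.replicate(key, value)
        else:
            self.cache.append((key, value))   # save for later deployment

    def replicate(self, key, value):
        pass  # stub: push the change to sibling servers in the pool

    def on_peers_back(self):
        self.peers_online = True
        while self.cache:                     # deploy queued changes in order
            self.replicate(*self.cache.popleft())

node = PoolNode()
node.peers_online = False
node.write("site.conf", "v2")
print(len(node.cache))    # 1 change queued while offline
node.on_peers_back()
print(len(node.cache))    # 0: queue drained on reconnect
```

The real implementation also has to resolve conflicting writes made on both sides of the partition, which this toy version ignores.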

On the file system side, every server is responsible for keeping the others in its pool up to date, splitting the problem in two, as every server is responsible both for itself and for the others. We do something similar to how rsync works, pushing only differences between servers over a secure connection (e.g. a VPN).

At this point, we’d have a DB-layer abstraction for multiple servers responding as one database and file system.

Next is the Domain Name System problem: we don’t want clients reaching servers that are offline due to maintenance or problems. For this, we’re developing an abstraction layer on BIND 9 where every server in a pool must be “network aware” of its peers; if it’s unable to reach the others, it must update the DNS records to reflect the change.

How MultiAll solves the problem:

Basically, MultiAll will work as a service provider inside the server, a global abstraction layer; once the application is given a bridge or connection layer to it, it can take advantage of the system. MultiAll’s key features:

  1. Topology-Aware Neighboring System (TNS): provides a topology discovery service for the servers (or “nodes”), which may eventually “recalculate” the topology in case of unexpected downtime (such as network, power or hardware failures, among others) or in cases of planned downtime (such as server maintenance, hardware upgrades or migrations). The different synchronization agents of “Multi All” depend on information learned through TNS, therefore it is considered a critical component of the “Multi All” system.
  2. System Baseline Monitor (SBM, formerly known as “checker”): provides a 24/7 server health monitoring, locally on each server (or “node”) using “Baseline Rules”. These rules are defined on a “per pool” or “per server” basis, allowing the configuration to be as granular as needed, making exceptions if the need arises. The health status is published via TNS using standardized codes known as “SBM Statuses”. It is also considered a critical component of the “Multi All” system.
  3. File System Synchronization Agent (FS-SA): lets you define structures inside your filesystem to keep synchronized across a pool of servers. FS-SA, used in combination with the rest of the “Multi All’s” Synchronization Agents (SA’s) provides you with high availability, data redundancy and server load balancing on your pool of servers.
  4. Software and Libraries Synchronization Agent (SL-SA): especially useful for large-scale unattended deployments, the SL-SA keeps all your software packages, services and libraries consistent across the pool, raising awareness to the system administrators when possible conflicts, incompatibilities or other issues arise.
  5. Database Synchronization Agent (DB-SA): keeps the different database servers of the pool synchronized. Depending on the needs of the underlying applications, the DB-SA may either work 24/7 to keep the DBs in perfect sync, or you could define your own database synchronization policies.
  6. Domain Name System Synchronization Agent (DNS-SA): keeps the DNS zones up to date with the pool’s topology either via a pull-push synchronization mechanism (handled by the FS-SA) or by rebuilding the DNS zone according to the topology discovered by the TNS.

The project has gotten a bit more ambitious… so here is how it’s changed!

Okay, so some research has been going on and the project has grown quite a bit. Everything stated above is now part of a much larger system.
TakeConnector will now be the daemon that keeps our infrastructure running.

]]>
https://www.derekdemuro.com/2014/08/01/jarvis-dla-digital-life-assistant/feed/ 0 3006
Simple Bash script to monitor temperature changes in a PC. https://www.derekdemuro.com/2013/12/09/simple-bash-script-to-monitor-temperature-changes-in-a-pc-2/ https://www.derekdemuro.com/2013/12/09/simple-bash-script-to-monitor-temperature-changes-in-a-pc-2/#respond Mon, 09 Dec 2013 23:21:00 +0000 https://www.derekdemuro.com/?p=5351 Temperature monitoring script. -Find out what it mainly does.-

Overview: After one of our servers ran hot during a ‘heat wave’ and we failed to notice it, I decided to add a simple script to our monitoring stack to watch the temperature and notify us in case something goes wrong.

Basically, the script runs sensors and parses its output to let us know if something goes wrong, leaving a simple log line so we can check the average temperatures.

The only package you’ll need is sensors (from lm-sensors).

To install sensors and get this script running:

  1. Install the lm-sensors package.
  2. Run sudo sensors-detect and choose YES to all YES/no questions.
  3. At the end of sensors-detect, a list of modules that need to be loaded will be displayed. Type “yes” to have sensors-detect insert those modules into /etc/modules, or edit /etc/modules yourself.
  4. Next, run sudo service module-init-tools restart. This will read the changes you made to /etc/modules in step 3 and insert the new modules into the kernel.
  5. Copy this script to ~ (root’s home folder) and make it executable: chmod u+x (scriptname).sh
  6. Run crontab -e and add the line “@reboot /root/(scriptname).sh”.
#!/bin/bash
######################################################################################
# Derek Demuro, this script is given as is, CopyLEFT                                 #
######################################################################################
######################################################################################
# README                              LEAME                                          #
######################################################################################
# This script will run and check the temperature; in case it is too high, MAIL!
#
############################To Set Up#################################################
#
# To set this script up, you'll need to add it to a cronjob to run on boot.
# Most Linux distros will allow the crontab entry: @reboot /(path)/servermon.sh
# So... crontab -e
# At the bottom add: @reboot /(path)/servermon.sh
# REMEMBER TO ADD THE EXECUTABLE BIT TO THE FILE (chmod +x)
######################################################################################
# SCRIPT CONFIGURATION                                                               #
######################################################################################
### Where should the log be saved?
readonly LOGNAME='servertemp.log'
### Who should we mail on error?
readonly MAILTO='mail@derekdemuro.me'
### How much time between checks?
readonly SLEEPTIME=30
### Alert if temp above
readonly MAXTEMP=75
### Server name
readonly SERVNAME='UYMF1DEB'
### How long to sleep after a message is sent
readonly SLEEPERROR=216000
### How many records to keep
readonly CLEARLOGTMS=1000
######################################################################################
#################################FUNCTIONS START################################

### Function to clear the log
function clearLog() {
  echo 'Log cleared' > "$LOGNAME"
  echo "Script will run $1 times then will clear itself" >> "$LOGNAME"
  return 0
}

#################################FUNCTIONS FINISH################################
#################################MAIN SCRIPT FUNC################################
times=0
while true; do
    # Add 1 to times
    times=$((times + 1))
    ## Clear the log once we reach the record limit
    if [ "$times" -eq "$CLEARLOGTMS" ]; then
        clearLog "$CLEARLOGTMS"
        times=0
    fi
    ## Grab the integer part of the last core temperature reported by sensors
    currentTemp=$(sensors | grep Core | awk '{print $3}' | cut -b2,3 | tail -1)
    if [ "$currentTemp" -gt "$MAXTEMP" ]; then
        echo -e "\e[00;31m$(date +'%Y-%m-%d %H:%M:%S') ALERT: Current temperature: $currentTemp, at server: $SERVNAME \e[39m"
        mail -s "ALERT: Temperature above threshold" "$MAILTO" <<< "$(date +'%Y-%m-%d %H:%M:%S') ALERT: Current temperature: $currentTemp, at server: $SERVNAME"
        sleep "$SLEEPERROR"
    else
        echo -e "\033[38;5;148m $(date +'%Y-%m-%d %H:%M:%S') All good: $currentTemp at server: $SERVNAME \033[39m"
        ## Leave a log line so we can check the average temperatures later
        echo "$(date +'%Y-%m-%d %H:%M:%S') OK: $currentTemp at server: $SERVNAME" >> "$LOGNAME"
    fi
    ## We sleep till the next run
    sleep "$SLEEPTIME"
done
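As a quick sanity check, the temperature-extraction pipeline from the loop above can be tested in isolation. The sample line below mimics typical lm-sensors output; real formats vary by chip driver, so treat it as an assumption:

```shell
# Extract the integer part of a core temperature from a sensors-style line.
line='Core 0:       +45.0°C  (high = +80.0°C, crit = +100.0°C)'
temp=$(echo "$line" | awk '{print $3}' | cut -b2,3)
echo "$temp"   # prints 45
```

Note that cut -b counts bytes, which is fine here because the two digits we want come before the multi-byte degree sign.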
]]>
https://www.derekdemuro.com/2013/12/09/simple-bash-script-to-monitor-temperature-changes-in-a-pc-2/feed/ 0 5351
Simple Bash script to monitor temperature changes in a PC. https://www.derekdemuro.com/2013/08/30/simple-bash-script-to-monitor-temperature-changes-in-a-pc/ https://www.derekdemuro.com/2013/08/30/simple-bash-script-to-monitor-temperature-changes-in-a-pc/#respond Fri, 30 Aug 2013 03:59:51 +0000 https://www.derekdemuro.com/?p=1931

#!/bin/bash
#######################################################################
# Read dnsServ.lst                                                    #
# Query for all records in domain.lst for each dnsServ.lst            #
# Used for domain caching at ISPs                                     #
#######################################################################
## Server list location
readonly servList='dnsServ.lst'
## Domains to be checked
readonly domainQ='domain.lst'
## Where to log the output
readonly outputFile='dnsCheck.log'

echo "" > "$outputFile"
cat "$servList" | while read servL
do
    cat "$domainQ" | while read dom
    do
        echo "Querying $servL for domain: $dom"
        digOutput=`dig ANY +noadditional +noquestion +nocomments +nocmd +nostats $dom. @$servL`
        echo "$digOutput"
        echo "$digOutput" >> "$outputFile"
    done
done

So there you have the script. Now, if you want a list of DNS servers to check your domains against…

8.8.8.8
8.8.4.4
156.154.70.1
156.154.71.1
208.67.222.222
208.67.220.220
198.153.192.1
198.153.194.1
4.2.2.1
4.2.2.2
4.2.2.3
4.2.2.4
4.2.2.5
4.2.2.6
67.138.54.100
207.225.209.66
85.88.19.10
85.88.19.11
87.118.100.175
94.75.228.28
62.141.58.13
85.25.251.254
85.214.73.63
212.82.225.7
212.82.226.212
213.73.91.35
58.6.115.42
58.6.115.43
119.31.230.42
200.252.98.162
217.79.186.148
82.229.244.191
216.87.84.211
66.244.95.20
204.152.184.76
194.150.168.168
80.237.196.2
194.95.202.198
88.198.130.211
78.46.89.147
129.206.100.126
79.99.234.56
208.67.220.220
208.67.222.222
156.154.70.22
156.154.71.22
85.25.149.144
87.106.37.196
8.8.8.8
8.8.4.4
88.198.24.111
58.6.115.42
202.83.95.227
119.31.230.42
217.79.186.148
178.63.26.173
178.63.26.174
27.110.120.30
89.16.173.11
210.80.60.1
210.80.60.2
199.166.24.253
199.166.27.253
199.166.28.10
199.166.29.3
199.166.31.3
195.117.6.25
204.57.55.100
4.2.2.1
4.2.2.2
4.2.2.3
4.2.2.4
4.2.2.5
4.2.2.6
64.129.67.101
64.129.67.102
64.129.67.103
151.197.0.38
151.197.0.39
151.202.0.84
151.202.0.85
151.203.0.84
151.203.0.85
199.45.32.37
199.45.32.38
199.45.32.40
199.45.32.43
192.76.85.133
206.124.64.1
67.138.54.100
220.233.167.31
199.166.31.3
66.93.87.2
216.231.41.2
216.254.95.2
64.81.45.2
64.81.111.2
64.81.127.2
64.81.79.2
64.81.159.2
66.92.64.2
66.92.224.2
66.92.159.2
64.81.79.2
64.81.159.2
64.81.127.2
64.81.45.2
216.27.175.2
66.92.159.2
66.93.87.2
199.2.252.10
204.97.212.10
204.117.214.10
64.102.255.44
128.107.241.185

]]>
https://www.derekdemuro.com/2013/08/30/simple-bash-script-to-monitor-temperature-changes-in-a-pc/feed/ 0 1931
TakeLAN Connector- Administration to the next level https://www.derekdemuro.com/2013/04/14/takelan-connector-administration-to-the-next-level/ https://www.derekdemuro.com/2013/04/14/takelan-connector-administration-to-the-next-level/#respond Sun, 14 Apr 2013 06:00:19 +0000 https://www.derekdemuro.com/?p=3061 The idea behind MultiAll.

Abstract:

The problem: managing multiple servers that do similar work and are used to host in-house applications. The goal: avoid configuration time sinks, automate server configuration, and allow fast deployments with no scaling limit.

The problem:

The challenge MultiAll tackles is keeping all the servers in sync when they sit in different “Farms or Grids”, together with the fact that one server can provide services for more than one purpose. This is why we invented the “pool system”, where all the servers respond or act for a defined pool.

If a server responds to multiple pools, the server is the aggregation of every pool it belongs to.

Understanding how we can make it work:

Our research shows it’s possible to merge more than one server to work toward a common goal by modifying typical database-driven apps to keep a cache table. When a server is unable to communicate with the others in its pool, it saves the changes it has to make on its sibling servers and, later on, when they are back online, deploys the changes.

On the file system side, every server is responsible for keeping the others in its pool up to date, partitioning the problem so that every server answers both for the others and for itself. We do something similar to how rsync works: we only push the differences between servers over a secure connection (e.g., a VPN).

At this point we would have a DB-layer abstraction for multiple servers responding to a database and a file system.

Next is the Domain Name Server problem. We don’t want clients to reach servers that are offline due to maintenance or issues; for this, we’re developing, on top of BIND 9, an abstraction layer in which every server in a pool must be “network-aware” of the others of its kind. If a server is unable to reach the others, it must update the DNS records to reflect the change.
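A rough sketch of the “network-aware” DNS idea: a pool member rebuilds the record set from the peers it can actually reach, and bumps the zone serial so secondaries pick up the change. The probe and the zone-writing mechanics are placeholders for whatever the BIND 9 layer actually does:

```python
def rebuild_zone(pool, is_reachable, serial):
    """Build the record set a pool member should publish for its peers.

    Keeps only reachable nodes and bumps the zone serial so secondary
    name servers notice the change. `pool` maps hostnames to IPs;
    `is_reachable` is whatever probe the deployment uses (ICMP, a TCP
    connect, a health-status lookup, ...).
    """
    live = {host: ip for host, ip in sorted(pool.items()) if is_reachable(host)}
    return live, serial + 1
```

In practice the returned records would presumably be written out as a zone file or pushed to BIND with a dynamic update, which is outside the scope of this sketch.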

How MultiAll solves the problem:

MultiAll will work as a service provider inside the server, acting as a global abstraction layer. Once an application is provided with a bridge or connection layer to it, it can take advantage of the system. MultiAll’s key features:

  1. Topology-Aware Neighboring System (TNS): provides a topology discovery service for the servers (or “nodes”), which may eventually “recalculate” the topology in case of unexpected downtime (such as network, power or hardware failures, among others) or instances of planned downtime (such as server maintenance, hardware upgrades or migrations). The different synchronization agents of the “Multi All” depend on information learned through TNS; therefore, it is considered a critical component of the “Multi All” system.
  2. System Baseline Monitor (SBM, formerly known as “checker”): provides 24/7 server health monitoring, locally on each server (or “node”), using “Baseline Rules.” These rules are defined on a “per pool” or “per server” basis, allowing the configuration to be as granular as needed, making exceptions if the need arises. The health status is published via TNS using standardized codes known as “SBM Statuses.” It is also considered a critical component of the “Multi All” system.
  3. File System Synchronization Agent (FS-SA): lets you define structures inside your filesystem to keep synchronized across a pool of servers. FS-SA, used in combination with the rest of the “Multi All’s” Synchronization Agents (SA’s), provides you with high availability, data redundancy, and server load balancing on your pool of servers.
  4. Software and Libraries Synchronization Agent (SL-SA): especially useful for large-scale unattended deployments, the SL-SA keeps all your software packages, services, and libraries consistent across the pool, raising awareness to the system administrators when possible conflicts, incompatibilities or other issues arise.
  5. Database Synchronization Agent (DB-SA): keeps the different database servers of the pool synchronized. Depending on the needs of the underlying applications, the DB-SA may keep the DBs in perfect sync, or you could define your database synchronization policies.
  6. Domain Name System Synchronization Agent (DNS-SA): keeps the DNS zones up to date with the pool’s topology either via a pull-push synchronization mechanism (handled by the FS-SA) or by rebuilding the DNS zone according to the topology discovered by the TNS.
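To make the SBM idea concrete, here is a toy evaluation of “Baseline Rules” with per-pool defaults and per-server overrides. The rule format and status codes are invented for illustration; the real SBM rules and statuses are not published:

```python
def effective_rules(pool_rules, server_rules):
    # Per-server rules override the pool-wide baseline, key by key.
    merged = dict(pool_rules)
    merged.update(server_rules)
    return merged

def sbm_status(metrics, rules):
    """Return a toy 'SBM status': OK, or DEGRADED plus the violated rules."""
    violations = [name for name, limit in rules.items()
                  if metrics.get(name, 0) > limit]
    return ("OK", []) if not violations else ("DEGRADED", sorted(violations))
```

The status tuple is what would then be published to the other pool members via TNS.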

The project has gotten a bit more ambitious… so here is how it’s changed!

Okay, so some research has been going on and the project has grown quite a bit. Everything stated above is now part of a much larger system.
TakeConnector will now be the daemon that keeps our infrastructure running.

]]>
https://www.derekdemuro.com/2013/04/14/takelan-connector-administration-to-the-next-level/feed/ 0 3061