Thursday, 26 November 2015

New Dell computers and online security

A certificate security issue has been identified on recently sold Dell computers. Affected computers can be tricked into believing that fake sites are legitimate. To test if your computer is vulnerable:

Technical details of the issue can be found at:

Friday, 20 November 2015

Ransomware - Why Antivirus Software is not Enough

This post was prompted by a recent telephone enquiry about preventing ransomware infections.  I thought it would be sensible to write a non-technical discussion of ransomware.  I pondered what to include for some time, as even a non-technical article was in danger of becoming very long.  In the end I decided on the Frequently Asked Questions format below.

What is ransomware?

Ransomware is a form of malware that attempts to encrypt data files.  Typically, these files will be spreadsheets, word processing documents, pictures and so on.  In short, a large amount of any organisation’s or person’s valuable data.  Once the encryption process has run, the user is offered the decryption key in return for paying an amount of money.  Usually, payment is demanded in the form of an untraceable Bitcoin transaction.

Why can’t you just “unencrypt” our files?

The encryption used is very “strong”.  Without the necessary “key” it is not possible to reverse the encryption process.  So called “brute force” (in simple terms, guessing the key) would eventually get there, but not within any useful timeframe.  In some cases law enforcement has caught up with the bad guys and published the recovered keys, which can then be used to decrypt infected files.(1)
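To put “not within any useful timeframe” in perspective, here is a rough back-of-the-envelope calculation for a 128-bit key.  The guess rate of a trillion keys per second is an assumption, and a very generous one:

```python
# Rough estimate of brute-forcing a 128-bit encryption key.
KEYS = 2 ** 128                      # size of the keyspace
GUESSES_PER_SECOND = 1e12            # assumed: a generous trillion guesses/sec
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# On average the key is found after searching half the keyspace.
years = (KEYS / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
print(f"{years:.2e} years")  # roughly 5.4e+18 years
```

Even at that implausible guess rate, the expected search time is billions of times the age of the universe.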

Why do the bad guys do this?

It is incredibly lucrative for them.  This year alone, it is believed that the group behind the “Cryptowall” ransomware has made US$325 million.(2)

How would it get on to our computers?

Ransomware infection can have a number of sources.  Most commonly, it arrives as an e-mail with an attachment or a link to an infected website.  Some of these are bulk e-mails that try to catch the unwary, such as speeding fine notifications(3).  We do, however, see infection attempts that are clearly targeted at the organisation receiving the e-mail.  For example, property lawyers receiving e-mails with subjects like “Property listing in suburb A”.  Unfortunately, an infected website may not be an obscure, not-safe-for-work site.  The bad guys are very well resourced (see the figure of US$325 million above) and will go to great lengths to compromise seemingly safe, well known websites.(4)

But, we have anti-virus software, won’t that protect us?

It is possible that your anti-virus software will detect the ransomware; unfortunately, that will almost certainly be around 48 hours after all your files have been encrypted.  The well resourced bad guys go to great lengths to avoid their malware being detected by current virus checkers.  This is not to say that you should stop running anti-virus software; it is still required.  It is just not enough, on its own, to prevent ransomware infection.

So, how do we protect our system?

You absolutely must have good backups.  These won’t prevent infection, but they will allow you to recover if the worst happens.  The primary defence against ransomware infections that we are implementing is “Software Restriction Policies”.

Software Restriction Policies - how do they work?

Very simply:  SRP will only allow programs to run from certain locations.  If the end user cannot save a file in any of those locations, because of the configured security, then they cannot run an inadvertently downloaded ransomware tool.  That is a very simple explanation of a complex configuration, but it covers the essence of what SRP does.
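The core idea can be sketched in a few lines of Python.  The whitelist entries and file paths below are illustrative only, not a real policy:

```python
from pathlib import PureWindowsPath

# Illustrative whitelist: the only locations programs may run from.
# On a real system the end user cannot write to these folders.
WHITELIST = [r"C:\Program Files", r"C:\Program Files (x86)", r"C:\Windows"]

def is_allowed(exe_path: str) -> bool:
    """Return True if exe_path sits under one of the whitelisted folders."""
    exe = PureWindowsPath(exe_path)
    parents = {str(p).lower() for p in exe.parents}
    return any(str(PureWindowsPath(w)).lower() in parents for w in WHITELIST)

# A program installed by an administrator is allowed to run...
print(is_allowed(r"C:\Program Files\Word\word.exe"))           # True
# ...but a payload the user saved into their own profile is not.
print(is_allowed(r"C:\Users\alice\AppData\Roaming\evil.exe"))  # False
```

Real SRP rules are evaluated by Windows itself, of course; the point is simply that the decision is based on *where* the program lives, not *what* it is.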

Is that enough?

For now, yes.  Unfortunately, we are already seeing malware-laden e-mails that appear to be trying to circumvent Software Restriction Policies.  We will post updates as the threat evolves.


Wednesday, 11 November 2015

Nagios - a Non-Technical Explanation

Tim's article describing how he is using Nagios to monitor printer toner levels set me thinking about the possible reactions to it.  A tech already using Nagios might think "Just what I need to avoid running out of toner at my branch offices!".  Another tech might say "They're still using Nagios?!".  Everyone else (and probably some techs) would think "What is Nagios?"

I thought I would write an article aimed at the latter group.  I cannot go past stealing Wikipedia's summary of what Nagios is:
Nagios /ˈnɑːɡiːoʊs/, an open-source computer-software application, monitors systems, networks and infrastructure. Nagios offers monitoring and alerting services for servers, switches, applications and services. It alerts users when things go wrong and alerts them a second time when the problem has been resolved.

We use Nagios in a couple of ways.  Firstly, we run a WKC Nagios server that monitors key systems for clients.  Primarily, this means we know about problems and outages very quickly.  Certainly, there have been several occasions where I have called a client to tell them an Internet link has gone down before they had noticed themselves.  Nagios is also useful for gathering trends; see Tim's graph of toner levels.  This might equally be server storage utilisation, for example.

Secondly, for larger clients with multiple branch offices, we may install an in-house Nagios system.  This monitors and records considerably more metrics for that particular client's network than our own Nagios system would.

The figure below shows a screenshot of our own Nagios system.

I picked a screenshot showing a day when we had to check a couple of USB backups.  (The systems shown "snapshot" during the day using ShadowProtect to Network Attached Storage, which is then synchronised to USB disks that are taken off-site.  Perhaps the backup regimes we put in place for particular clients will be the subject of another article in the future.)

Nagios is quite venerable; we started using it in 2005.  Geoff did most of the work bringing monitoring of clients' systems online with our Nagios implementation when he joined us in 2006.  We have considered Icinga and Zabbix as alternatives, but Nagios is doing a great job for now.  If anyone has migrated from Nagios to another system and seen benefits, please let us know in the comments.

Tuesday, 27 October 2015

Nagios Script to Monitor HP/Kyocera Printer Toner Levels via SNMP

We look after a client that has a large number of printers spread over multiple sites.  Here's a quick and dirty script I created to monitor their toner levels using Nagios and SNMP:
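A minimal sketch of such a plugin, written here in Python rather than reproducing the original script, might look like this.  The OIDs are the standard Printer MIB (RFC 3805) supplies objects for supply index 1.1 (assumed here to be the black toner); the snmpget invocation, community string and thresholds are assumptions, and a real plugin would walk every supply and handle errors:

```python
import subprocess

# Printer MIB (RFC 3805) supplies objects, instance 1.1 (assumed: black toner).
OID_MAX_CAPACITY = ""  # prtMarkerSuppliesMaxCapacity
OID_LEVEL = ""         # prtMarkerSuppliesLevel

def snmp_get_int(host, oid, community="public"):
    """Fetch one integer via net-snmp's snmpget (-Oqv prints the bare value)."""
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid], text=True)
    return int(out.strip())

def toner_status(level, max_capacity, warn=20, crit=10):
    """Map a toner reading to a Nagios exit code and status/perfdata line."""
    pct = 100 * level // max_capacity
    if pct <= crit:
        return 2, f"TONER CRITICAL - {pct}% remaining|toner={pct}%"
    if pct <= warn:
        return 1, f"TONER WARNING - {pct}% remaining|toner={pct}%"
    return 0, f"TONER OK - {pct}% remaining|toner={pct}%"

# In the real plugin: parse -H from the command line, then
#   code, msg = toner_status(snmp_get_int(host, OID_LEVEL),
#                            snmp_get_int(host, OID_MAX_CAPACITY))
#   print(msg); sys.exit(code)
```

The percentage after the `|` is Nagios performance data, which is what lets pnp4nagios graph the trend.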

It works surprisingly well as a reminder to check they've got supplies on hand, and when historical performance data is gathered (we use pnp4nagios) you get useful trends to estimate how often replacement toner cartridges are required:

Adding it as a command is fairly straightforward:

define command {
    command_name check_toner
    command_line /usr/local/lib/nagios/plugins/check_toner -H '$HOSTADDRESS$'
}

You can then use check_toner in any service definition but I recommend creating a hostgroup for printers and defining the check for the whole hostgroup.
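For instance, a hostgroup-based setup might look like the following (the hostgroup name, service description and `generic-service` template are illustrative, not from the original configuration):

```
define hostgroup {
    hostgroup_name  printers
    alias           Networked printers
}

define service {
    hostgroup_name       printers
    service_description  Toner Level
    check_command        check_toner
    use                  generic-service
}
```

With the service defined against the hostgroup, adding a new printer to monitoring is just a matter of adding its host to the `printers` hostgroup.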

Preventing Ransomware and Other Malware with Software Restriction Policy


Please be certain you know the impact of any Group Policies you implement.  Releasing a misconfigured Software Restriction Policy onto a production system can and will stop key applications working.  There is a famous example here:
Also check, before rolling out any Group Policy Object, that Active Directory is healthy and that functions such as replication are working correctly.


The steps below summarise how to configure Software Restriction Policies to allow a whitelist of applications, or folders where it is acceptable for applications to run, in an Active Directory domain environment.  For simplicity’s sake, it is assumed that the end user is not a local administrator on their own work PC.  This is an important point.  In order to safely whitelist “C:\Program Files\”, for example, it is necessary that the end user cannot write to that directory.  This procedure will not work and should not be followed in an environment where users have local administrator access.

To implement SRP

Run Group Policy Management on a computer that will allow access to the AD domain’s Group Policy Objects.  The example below is from an SBS Server.

From the Group Policy Objects container, select the Action menu, then New.  Create two new Group Policy Objects named “SRPs - enable” and “SRPs - remove”.  Note:  these are already shown in the above screenshot.  The reason for the second GPO is to allow a particular PC to be removed from SRP restrictions quickly if required.

At this point, pause in the Group Policy Management tool and create two Active Directory Security Groups in an OU accessible to the computers that will have the SRP implemented.  These can again be called “SRPs - enable” and “SRPs - remove” (the names of the groups and the GPOs are arbitrary; if a naming convention exists, it should be used).  Eventually, PCs will be brought into the SRP by making them members of the “SRPs - enable” AD group, but for the moment do not add any members to either of these.

Returning to Group Policy Management, set the “Filtering” on the Group Policy Objects that have been created so that only PCs that are members of the AD groups are affected by SRP.  Under “Security Filtering” click the Add button.  For the “SRPs - enable” GPO add the “SRPs - enable” AD group here and, similarly, add the “SRPs - remove” group to the “SRPs - remove” GPO.  See below:

Still in Group Policy Management, right click on the “SRPs - enable” and select Edit.  Expand Computer Configuration - Policies - Windows Settings - Software Restriction Policies.
Under “Security Levels” right click “Disallowed”.  Go to Properties in the menu and click on “Set as Default” (This is already set below, which is why it is greyed out.)

Now go to “Enforcement” and set as below.  Note that there are no certificate based exceptions in this implementation.

Continuing, go to “Designated File Types” and remove “LNK” from the list.  This allows the use of Desktop shortcuts, for example.

At this point it is necessary to add the exceptions - ie the locations where programs will be allowed to run.  Go to “Additional Rules”.  The screen will look something like this (some names have been removed for privacy):

What is entered here will depend on each environment.  Usually as a minimum it is necessary to add:
C:\Program Files\
C:\Program Files (x86)\
\\SERVERNAMEDC1\netlogon  (in a domain environment)
\\SERVERNAMEDC2\netlogon  (if there are multiple domain controllers, list each netlogon share)

If there are Windows 8 clients add:
C:\Program Files\WindowsApps

If network paths are required UNC notation must be used - ie \\server\share.  Variables such as %appdata% can be used.

The "SRPs - enable" GPO is now configured.  To bring PCs under the SRP umbrella all that is now required is to make the PC a member of the "SRPs - enable" AD group.  PCs should be added in a controlled manner to allow for testing.  There will almost certainly be a requirement for the Additional Rules to evolve in a large or diverse environment.

To configure the "SRPs - remove" GPO, right click on it and select Edit.  Go to “Security Levels” and change the default to “Unrestricted”.

It will now be possible to apply/remove the SRP based on membership of the AD groups, remembering that it will probably be necessary to run “gpupdate /force” on the client after making changes.

Thursday, 22 October 2015

NBN Users - will your phone work when the power is off?

An "old fashioned" telephone line will work during a power outage. If you have a handset that does not need power (i.e. an old fashioned one) you will be able to make a call when the power is off. With an NBN service this is not the case. NBN installations have included battery backups so that phones will work during power cuts. But it seems these batteries were not of the highest quality...

Tuesday, 20 October 2015

If you ever wondered what those pesky license agreements actually say...

Thanks to Rob Shecter here is a summary.

Combating ISP Bill Shock

When starting a new data service, one of the common issues is bill shock. Excess usage can often lead to a bill in the hundreds or even thousands of dollars, and by the time you know this is happening, it's too late. Arguing the point with your provider can often be difficult - when all you have is your word versus their potentially incorrect data.

To try and improve our odds in this situation, we keep tabs on the data usage ourselves. With a Raspberry Pi (or other computer) and any half-decent router, you can automatically gather the necessary data and have something to challenge your ISP with in the event any excessive bills arrive. Generally speaking we like to use Mikrotik routers, but we have also set this up with Cisco and Snapgear devices.

The advantage of a Raspberry Pi is that it's cheap, tiny and if you know Linux - easy to set up.


This guide assumes that:
- The Raspberry Pi (or other Linux-based device) is up and running.
- You have SNMP enabled on your router.
- You have created a user on your Pi called "monitor" - which you'll use to run RTG.


Install mysql if it's not already installed:
$ sudo apt-get install mysql-server
- Remember the root password you enter here. We'll need it later.

Set up email:
$ sudo apt-get install exim4 exim4-config
$ sudo dpkg-reconfigure exim4-config
Tell it you have a smarthost but no local mail. When it asks for a smarthost, give it the address of your ISP's mail server.

Installing RTG (version 0.7.4 is current at the time of writing):
$ sudo apt-get install libmysqlclient-dev libsnmp-dev zlib1g-dev libdbi-perl libsnmp-session-perl mysql-client libsnmp15 screen
$ wget
$ tar xzf rtg-0.7.4.tar.gz
$ cd rtg-0.7.4
$ ./configure --bindir=/home/monitor/rtg --sysconfdir=/home/monitor/rtg --with-mysql=/usr --with-snmp=/usr --prefix=/home/monitor/rtg
$ make
$ make install
$ mkdir ~/rtg
$ cp etc/createdb bin/ etc/rtg.conf etc/ ~/rtg
$ cd ~/rtg
Create the database:
$ ./createdb mysqlrootpassword
- Substitute your root password here.

Configure RTG:

Edit the "/home/monitor/rtg/etc/routers" file. Delete the existing entries there, and add the following line to the file:
- substitute the IP address of your router for the address here.

Now create a targets.cfg file with the script. You should see something along the lines of the following:
$ ./
Poking (public) (32 bit)...
No router id found for
No id found for ether1 on device 1...adding.
No id found for ether10 on device 1...adding.
Now, tell RTG's polling script to run on bootup. Add the following to /etc/rc.local, before the "exit" line:
su -l monitor -c "screen -dmS rtgpoll /home/monitor/rtg/rtgpoll -vvv -t /home/monitor/rtg/targets.cfg -c /home/monitor/rtg/rtg.conf"

And lastly, set up a daily report email:
$ crontab -e
59 23 * * * /home/monitor/rtg/ \% -01d | mail -s "Daily bandwidth utilisation" [email protected]
Now, what we hope to achieve with all of this is a daily email that looks like the following:
                          In      Out  Avg In  Avg Out  Util   Util   Max In  Max Out  Max Ut  Max Ut
Connection            MBytes   MBytes    Mbps     Mbps  In %   Out %    Mbps     Mbps     In%    Out%
ether1                   281    7,800    0.03     0.72  0.00    0.07    0.22     7.04    0.02    0.70
ether3                    12       62    0.00     0.01  0.00    0.00    0.01     0.24    0.00    0.02
ether5                    24       64    0.00     0.01  0.00    0.01    0.01     0.01    0.01    0.01
ether10                9,410      406    0.87     0.04  0.87    0.04    8.58     0.26    8.58    0.26
wlan1                  2,429    3,879    0.23     0.36  2.09    3.27    5.99     5.18   54.45   47.09
bridge1                  350    9,338    0.03     0.87  0.03    0.87    0.19     8.49    0.19    8.49
ISP                    9,217      281    0.86     0.03  8.60    0.30    8.40     0.16   84.00    1.60

Total:                21,723   21,830    2.02     2.04                  23.4    21.38
Here, we can see exactly how much data went in and out of the Internet interface - we can then take those figures and compare them against what our ISP claims we used.
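The arithmetic behind those figures is simple enough to sketch: each poll reads the interface octet counters via SNMP and records the difference from the previous sample.  A rough illustration follows; the 64-bit counter width and the sample values are assumptions, and RTG handles all of this (plus the database) for you:

```python
# SNMP interface counters only ever increase, then wrap around at their
# maximum value, so usage over an interval is the wrap-aware difference
# between two samples.
COUNTER_MAX = 2 ** 64          # assumed: 64-bit ifHCInOctets-style counter

def octets_used(previous: int, current: int) -> int:
    """Bytes transferred between two counter samples, allowing one wrap."""
    if current >= previous:
        return current - previous
    return (COUNTER_MAX - previous) + current

# Example: two samples taken five minutes apart.
sample_9_00 = 123_456_789
sample_9_05 = 173_456_789
mbytes = octets_used(sample_9_00, sample_9_05) / 1_000_000
print(mbytes)  # 50.0 MBytes in five minutes
```

Summing those deltas over a day gives the MBytes columns in the report above.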

The small amount of time spent on setting this up will pay for itself when it comes to bill time - and if you're keeping an eye on the daily usage mail, hopefully you can spot excessive usage before it becomes a bigger problem.

Thursday, 15 October 2015

Office Relocations - what to consider for ICT

Moving offices can be expensive and disruptive.  The ICT side of an office move can add pain, expense and disruption.  Over the years we have managed many office moves from the perspective of making sure that the technology is working on day one.  There are plenty of things that can go wrong during a move, technology or otherwise.  Avoiding a situation where no one can send an e-mail or make a phone call is not difficult.  Here are some things to look out for to assist with a smooth move.

It is never too early to talk to telcos

Telcos can handle relocations and provisioning for voice and data services on specific dates.  Telstra, for example, will carry out what they call a “relocation” on a booked date.  It just helps to give considerable time for the wheels of bureaucracy to turn.  As soon as dates are fixed, book your telco.  In fact, this is so important that you should not fix the moving dates without having your telco(s) on board, unless you fancy a quiet, e-mail and phone call free first few days in the new place.

Consider duplicating data services

From our own experience, “relocating” (i.e. moving data services at the time of an office move) is a recipe for raised blood pressure and strained conversations with service providers, typically late on a Friday afternoon.  Consider connecting duplicate services at the new premises in advance.  This only works for data, not voice, because of the way phone numbers are transferred.  If you have data services provisioned and tested at the new premises, you bring things back under your own control - for example, amending DNS records.

Do the new premises have what we need?

This possibly should have been at the top of the list.  But if you’ve got far enough to be booking in the telco, you really should know by now that the infrastructure in the new premises meets your requirements.  Just in case you need to double check:  is the cabling OK (or finished)?  Do you have access to the telephony frames and risers?  Is there adequate capacity on those frames and risers?  Is the lift or building access spacious enough for your equipment?  A removal team, paid by the hour, waiting by a 2m high lift with a 2.1m comms cabinet won’t improve your mood.

Beware the NBN

Continuing from the paragraph above, and assuming you are in Australia:  for reasons that are not really clear, getting an NBN connection to a new building - even one that the NBN Co address check says is NBN connected - can take months.

No to scope creep

It is very tempting to think “I could just install this” while you have some downtime.  Think long and hard before adding to the list of variables you may be wondering about come Monday morning.  Monday mornings after office moves are for accepting congratulations on how smoothly it has all gone and pretending the last 72 hours have not aged you prematurely, not for fire fighting.

Beware Murphy’s

We have had telcos, booked months in advance, say words to the effect of “not going to happen today” on a Friday afternoon (not often, but it has happened).  Back in the days before ubiquitous fast data connections, I have had to motorcycle-courier large quantities of data, booking and paying for two couriers for the entire day (in case one fell off!), only to have the first one take four hours to do 100km (the Gantt chart allowed two hours for this).  We have had equipment go missing, and defamed the perfidy of movers, only for it to turn up in a hidden box two years later.  And one for the TCP/IP types:  you would never expect to have your new premises inadvertently connected to another tenant in your building (and this was a large office on Haymarket, in London, not a home help cabling job), with them sharing your IP address range!

Wednesday, 29 July 2015

Getting Office 365 to Work Without Completely Opening Up Internet Access

When it comes to Internet access I'm a firm believer in only granting the minimum access to get the job done.  I typically run a Squid proxy using Kerberos authentication, coupled with a default-deny firewall policy.

Office 365 isn't able to work through a proxy, even when no authentication is required.  If you can prove me wrong I would love to hear about it!

To work around this issue, I wrote a simple Python script to grab the address ranges from the XML file provided by the Microsoft Office 365 team.

I load the ranges into an array object and write out two files:

The first file contains the commands to create an address list for the firewall.  A rule is defined on the firewall to allow requests to TCP ports 80 and 443 if the destination address is within said address list.

The second file is a JavaScript PAC file telling the browser to go direct if the host resolves to an IP within the list.  I also add the loopback address and RFC 1918 addresses to the list.  If the host isn't in the list, it will fall back to using the defined proxy.

Here is a trimmed version of the file so you can see how it works:

function FindProxyForURL(url, host) {
    var resolved_ip = dnsResolve(host);
    if (isInNet(resolved_ip, "", "") ||
        isInNet(resolved_ip, "", "") ||
        isInNet(resolved_ip, "", "") ||
        isInNet(resolved_ip, "", ""))
        return "DIRECT";
    return "PROXY";
}

You can push out proxy settings using Group Policy.

While this isn't ideal (you can't monitor how much traffic users send to and from Office 365), you can at least keep tabs on the rest of their Internet usage.