Linux SSH Session Logging

In the Windows world, I configure PuTTY to log all of my sessions for a variety of reasons. On a Linux client, PuTTY isn't a sensible option anymore, so I started exploring how to replicate this functionality. Google indicated the "script" command was what I was looking for.

However, the script command by itself would require quite a bit of extra typing to pass the ssh command and the log directory. Here is a simple bash script to make it seamless.

#!/bin/bash
# Usage: ssh-logging.sh <hostname>
# <hostname> can be in the form of user@host
script -q -e -c "ssh $1" "/home/jon/ownCloud/ssh-logs/$1-$(date +%F-%T).log"

I made an alias for ssh to this script, but you could easily call it directly.

alias ssh='/path/to/script/ssh-logging.sh'

There are lots of things that could be done to improve it: accepting more arguments for ssh, moving the hard-coded paths into variables, and so on. This is good enough for my purposes.
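
As a sketch of the first improvement, here is a variant that passes every argument through to ssh and names the log after the last argument (assumed to be the host). The log directory default is my assumption; adjust it for your setup.

#!/bin/bash
# Usage: ssh-logging.sh [ssh options] <hostname>
# Sketch: pass all arguments through to ssh
logdir="${SSH_LOG_DIR:-$HOME/ssh-logs}"   # assumed default location
mkdir -p "$logdir"
host="${@: -1}"                           # last argument, assumed to be the host
script -q -e -c "ssh $*" "$logdir/$host-$(date +%F-%T).log"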

On GitLab too

Controlling IPv6 Addressing on Windows

Windows will happily take IPv6 addresses via static configuration, DHCPv6, and SLAAC all at the same time (not to mention all of the transition technologies). To control this behavior, there are two netsh commands for managing SLAAC and DHCPv6 addressing.

The first will disable SLAAC addressing and the second will disable DHCPv6 addressing. Important side note: "routerdiscovery=disabled" also disables learning the gateway for DHCPv6 addressing. You have to leave router discovery enabled for DHCPv6 to function.
Substitute the actual name of your interface for "ethernet".

netsh interface ipv6 set interface "ethernet" routerdiscovery=disabled
netsh interface ipv6 set interface "ethernet" managedaddress=disabled
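
To confirm the changes took, dump the per-interface parameters (same interface name substitution as above); Router Discovery and Managed Address Configuration should now show as disabled.

netsh interface ipv6 show interface "ethernet"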

Bonus tip:  These commands will disable the IPv6 Transition Technologies that are enabled by default.

netsh interface ipv6 set teredo disabled
netsh interface ipv6 isatap set state disabled
netsh interface ipv6 6to4 set state disabled

Granting DiskOperator Privileges to Domain Admins

I am running an AD-joined Samba server and was trying to set up home directories for users per these instructions: https://wiki.samba.org/index.php/Setting_up_a_home_share

I got the error:

# net rpc rights grant "HOME\Domain Admins" SeMachineAccountPrivilege SePrintOperatorPrivilege SeAddUsersPrivilege SeDiskOperatorPrivilege SeRemoteShutdownPrivilege -Uadministrator
Enter administrator's password:
Failed to grant privileges for HOME\Domain Admins (NT_STATUS_ACCESS_DENIED)

Which led to these instructions and an hour of searching for an answer: https://wiki.samba.org/index.php/Setup_and_configure_file_shares_with_Windows_ACLs#SeDiskOperatorPrivilege

It turns out the instructions at that second link are no longer accurate for the combination of 2012R2 and Samba 4. I found the fix in this post: https://lists.samba.org/archive/samba/2014-April/180369.html

I think the command line you typed is using a old syntax. This is working for me on a 4.1.6 :

[root@srvfichiers.tranq ~]# net sam rights grant "TRANQUILIT\\domain admins" SeDiskOperatorPrivilege
Granted SeDiskOperatorPrivilege to TRANQUILIT\domain admins

[root@srvfichiers.tranq ~]# net rpc rights list accounts -U Administrator
Enter Administrator's password:
....
TRANQUILIT\domain admins
SeDiskOperatorPrivilege

In short the fix is:

net sam rights grant "HOME\\domain admins" SeDiskOperatorPrivilege -U<DomainAdminUser>

On to the next problem.

Post Haste!

I am playing with a pastebin called Haste. Very cool, very usable. Mine is internal, so I don't have a link for you. However, you can use the developer's: http://hastebin.com/

* [haste-client](https://github.com/seejohnrun/haste-client)
* [haste-server](https://github.com/seejohnrun/haste-server)

It was an interesting journey getting this working. It is based on node.js, something I had never worked with before. Installation started with the basics listed on the haste-server page above. It wasn't until much later that I discovered the wiki.

What I have is haste-server running via npm behind an nginx proxy that provides SSL. I went through a lot of variations until I got to this point. In the following, I will try to distill the process into something sane.

Prep the environment.

apt-get install git npm
cd /opt
git clone https://github.com/seejohnrun/haste-server.git
cd haste-server/
adduser --shell /usr/sbin/nologin --home /opt/haste-server haste-server
mkdir -p data
chown -R haste-server:haste-server static/ data/
npm install
npm start

This verifies that haste-server is running. Browse to the listed address to verify, then ctrl+c when done.
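
If the box is headless, a quick check from another shell works too. This assumes the stock config.js is still listening on its default port (7777 in the version I used); a 200 means it is up.

curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:7777/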

This is what I changed to prep for being proxied. Much to my dismay, haste does not support an IPv6 address in the host field.

vi /opt/haste-server/config.js
"host": "127.0.0.1",
"port": 12434,
"storage": {
	"type": "file",
	"path": "./data"
},

Now to get haste-server to start automatically. I don't have a great solution for this, so I will just point you at the instructions provided by the dev and say good luck!

I poked around at some of these options as well.
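
For reference, a minimal systemd unit along these lines is one option. Treat it as a sketch based on the layout above rather than anything from the haste docs; the unit name, npm path, and user are my assumptions.

vi /etc/systemd/system/haste-server.service
[Unit]
Description=haste-server pastebin
After=network.target

[Service]
User=haste-server
WorkingDirectory=/opt/haste-server
ExecStart=/usr/bin/npm start
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable it with systemctl enable haste-server and start it with systemctl start haste-server.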

Here is my nginx config. You will need to adjust the listen and server_name values to be appropriate for your build.
Resources I used for building a strong SSL configuration:

* Configuring Apache and Nginx for Forward Secrecy
* Grade A TLS with Nginx and StartSSL

vi /etc/nginx/sites-available/haste.conf
server {
		listen 192.0.2.100:80;
		listen [2001:db8:2210::100]:80;
		listen 192.0.2.100:443 ssl;
		listen [2001:db8:2210::100]:443 default ssl;
		server_name haste.example.com;
		if ($scheme = http) {
				return 301 https://$server_name$request_uri;
		}
		ssl_session_timeout 5m;
		ssl_session_cache shared:NginxCache123:50m;
		ssl_dhparam /path/to/ssl/dhparam.pem;
		ssl_certificate /path/to/ssl/haste.example.com.bundle.crt;
		ssl_certificate_key /path/to/ssl/haste.example.com.key;
		ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
		ssl_prefer_server_ciphers on;
		ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH EDH+aRSA !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
		ssl_stapling on;
		ssl_stapling_verify on;
		ssl_trusted_certificate /path/to/ssl/ca-intermediary.bundle.crt;
		resolver 2001:4860:4860::8888;
		location / {
				proxy_set_header   X-Real-IP $remote_addr;
				proxy_set_header   Host      $http_host;
				proxy_set_header   X-NginX-Proxy true;
				proxy_pass         http://127.0.0.1:12434;
				proxy_redirect     off;
		}
		add_header Strict-Transport-Security max-age=31536000;
		add_header X-Frame-Options DENY;
}

Here is the config breakdown.

Get nginx listening on the right IPs and URLs. I specified the IPs because binding wasn't behaving consistently when I used "[::]:80".

listen 192.0.2.100:80;
listen [2001:db8:2210::100]:80;
listen 192.0.2.100:443 ssl;
listen [2001:db8:2210::100]:443 default ssl;
server_name haste.example.com;

Force https by redirection.

if ($scheme = http) {
	return 301 https://$server_name$request_uri;
}

Basic SSL certs. Build the bundle like so:

cat site.crt intermediate.crt ca.crt > haste.example.com.bundle.crt

ssl_certificate /path/to/ssl/haste.example.com.bundle.crt;
ssl_certificate_key /path/to/ssl/haste.example.com.key;

Cached SSL session resumption

ssl_session_timeout 5m;
ssl_session_cache shared:NginxCache123:50m;

Making forward secrecy stronger.

ssl_dhparam /path/to/ssl/dhparam.pem;

This is how you generate the file.

openssl dhparam -outform PEM -out dhparam.pem 2048

Choosing strong ciphers. Note: if you follow my example, you will break some older clients like XP. (Make sure to take the line breaks out of the cipher list.)

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 \
EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH EDH+aRSA \
!RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";

OCSP stapling to make cert validation faster. The resolver is the IPv6 address of google-public-dns-a.google.com.

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/ssl/ca-intermediary.bundle.crt;
resolver 2001:4860:4860::8888;

Next we set up the proxy. Make sure the proxy_pass matches what you configured haste-server to listen on.

location / {
		proxy_set_header   X-Real-IP $remote_addr;
		proxy_set_header   Host      $http_host;
		proxy_set_header   X-NginX-Proxy true;
		proxy_pass         http://127.0.0.1:12434;
		proxy_redirect     off;
}

HTTP Strict Transport Security. Lets your clients know that plain http isn't used.

add_header Strict-Transport-Security max-age=31536000;

And to protect your clients a little more, prevent the page from being loaded in a frame. You can set SAMEORIGIN instead if you plan to load your bin in a frame locally.

add_header X-Frame-Options DENY;

Next, activate the config and restart nginx.

ln -s /etc/nginx/sites-available/haste.conf /etc/nginx/sites-enabled/haste.conf
service nginx restart

You, of course, have to generate your certs and get them signed. That is beyond the scope of this post.

You should be good to go at this point.

TLS Forward Secrecy

I was doing a little cleanup on my webserver and started looking into the state of TLS and forward secrecy on my site. Between the Calomel Firefox plugin and the SSL Test at SSL Labs, I found quite a few shortcomings against today's standards.

The first hurdle was getting good TLS 1.2 support, which meant upgrading to Apache httpd 2.4. Annoying, but not painful. After some research, I came across this article by SSL Labs as a starting point for configuring the TLS cipher list. Written in August of 2013, it felt a bit dated with regard to the RC4 weakness, so I modified their cipher list to block RC4 as well.

Here is my current mod_ssl configuration. I sacrifice access for some clients, namely XP and Java 6, by disabling RC4. Given that XP is dead and there aren’t too many instances where Java would need to be browsing my simple blog, I am not concerned with locking them out.

SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder On
SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH EDH+aRSA !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"
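
To spot-check what a given client actually negotiates after a change like this, openssl's s_client is handy. The hostname is a placeholder; the grep just trims the output to the protocol and cipher lines.

openssl s_client -connect www.example.com:443 -tls1_2 < /dev/null 2>/dev/null | grep -E "Protocol|Cipher"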

Update: I have since discovered this post from Qualys Security Labs discussing this very topic.

vCSA SSO Auth Error – ns0:RequestFailed: Operations error

I was getting a sporadic error with SSO sign-on to the vCSA Web Client. The error was:

"The authentication server returned an unexpected error: ns0:RequestFailed: Operations error. The error may be caused by a malfunctioning identity source."

When viewing users for the domain in the SSO configuration of the vCSA web user interface, I would get this error:

"Error: Idm client exception: Operations error"

Another symptom of the issue: "nslookup <domain>" would pause before returning its results.

Turns out, I had an incorrect forwarder set on my DC. The pause was the result of the DC timing out trying to reach the forwarder. Moral of the story: when you get these errors, validate that your DNS is configured and functioning properly.

Migrating rrd graph between architectures

I am moving some things around on my network, and part of that was moving my existing smokeping master to a new system.

ERROR: This RRD was created on another architecture

Little did I know that RRD files are architecture-specific, meaning an RRD created on a 32-bit system doesn't work on a 64-bit system. Seriously?

Anyway.  Thanks to Askar Ali Khan’s blog, I was able to do the migration.

The migration is a very simple process of converting the RRDs to XML and then converting them back on the new system, roughly as follows.
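
A sketch of the loop, assuming the default smokeping data directory; adjust the paths for your install:

# On the old 32-bit system, dump each RRD to XML
for f in /var/lib/smokeping/*.rrd; do rrdtool dump "$f" > "${f%.rrd}.xml"; done
# Copy the XML files to the new 64-bit system, then rebuild the RRDs
for f in /var/lib/smokeping/*.xml; do rrdtool restore "$f" "${f%.xml}.rrd"; done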

Big thanks to Askar.

DHCPv6-PD configuration for multiple subnets

First off, credit where credit is due: ipv6_twit over at ipcalypse provided most of the original insight for this.

Relevant man pages.
http://manpages.ubuntu.com/manpages/lucid/man5/dhcp6c.conf.5.html
http://www.huge-man-linux.net/man5/radvd.conf.html

Building on the material by ipv6_twit over at ipcalypse, here is an example of DHCPv6-PD for multiple subnets.

Basics:
eth0 is WAN/upstream
eth1 is LAN
eth2 is WLAN
eth3 is DMZ

The ISP offers addresses for the point-to-point link from 2001:db8:88:100::/64.
The ISP offers /48 delegations out of 2001:db8:ff00::/40.

We will configure the router to get a DHCPv6 address on the WAN interface and assign an address on each of the internal interfaces.

Starting with requesting a DHCPv6 address for the WAN interface.

interface eth0 {
 send ia-na 1;
 request domain-name-servers;
 request domain-name;
 script "/etc/wide-dhcpv6/dhcp6c-script";
};
id-assoc na 1 { };

This will put an address on the WAN interface from the 2001:db8:88:100::/64 DHCP pool.
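
As an aside, dhcp6c only reads its config at startup, so restart it after each change to /etc/wide-dhcpv6/dhcp6c.conf (the service name assumes the Debian/Ubuntu wide-dhcpv6-client package):

service wide-dhcpv6-client restart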

Now we add a request for a delegation and apply it to the LAN.

interface eth0 {
 send ia-na 1;
 request domain-name-servers;
 request domain-name;
 script "/etc/wide-dhcpv6/dhcp6c-script";
 send ia-pd 1;
};

id-assoc na 1 { };

id-assoc pd 1 {
 prefix-interface eth1 {
 sla-len 16;
 sla-id 0;
 ifid 1;
 };
};

The breakdown:

send ia-pd 1;

Sends the request for a single delegation.

id-assoc pd 1 {

Matches the id of the delegation request

prefix-interface eth1 {

Specifies the interface to apply it to.

sla-len 16;

The delegated prefix length plus this value must equal 64. For a /48 delegation sla-len is 16, for a /56 it is 8, for a /60 it is 4, and for a /64 it is 0.

sla-id 0;

Sets which prefix out of the delegation to use. Valid values run from 0 to 2^sla-len - 1: with a /48 (sla-len 16) that is 0-65535, with a /56 (sla-len 8) it is 0-255, and with a /60 it is 0-15.

ifid 1;

This is the interface identifier portion of the address applied to the interface. The delegation, the sla-id, and the ifid together make the full interface address. If it is not specified, the EUI-64 identifier will be used.
This configuration would result in 2001:db8:ff00::1/64 being assigned to the interface, assuming the PD server assigned the first /48 out of the /40. Expanded a little bit so you can see the breakdown: 2001:db8:ff00:0::1/64

2001:db8:ff00::/48 is the delegation
:0: is the prefix. This matches the sla-id
:1 is the postfix, as ipv6_twit calls it, and matches the ifid.

Here is another example.

interface eth0 {
 send ia-na 1;
 request domain-name-servers;
 request domain-name;
 script "/etc/wide-dhcpv6/dhcp6c-script";
 send ia-pd 1;
};

id-assoc na 1 { };

id-assoc pd 1 {
 prefix-interface eth1 {
 sla-len 8;
 sla-id 1e;
 ifid 22;
 };
};

Let's assume the delegation was a /56 out of the original /40; in this case 2001:db8:ff00:1200::/56 was assigned.
The prefix is 1e and the interface ID is 22, which results in 2001:db8:ff00:121e::22/64 being assigned to eth1.

Now we want to add a prefix to eth2.

interface eth0 {
 send ia-na 1;
 request domain-name-servers;
 request domain-name;
 script "/etc/wide-dhcpv6/dhcp6c-script";
 send ia-pd 1;
};

id-assoc na 1 { };

id-assoc pd 1 {
 prefix-interface eth1 {
 sla-len 16;
 sla-id 0;
 ifid 1;
 };
 prefix-interface eth2 {
 sla-len 16;
 sla-id 1;
 ifid 1;
 };
};

We are using the same prefix on both interfaces, so we don’t need to touch the interface stanza.

prefix-interface eth2 {

Add another prefix-interface stanza.

sla-len 16;

sla-len stays the same.

sla-id 1;

The sla-id we need to change. Set to 1, we get the second prefix out of the delegation.

ifid 1;

The interface ID can change or stay the same.

This results in 2001:db8:ff00:1::1/64 being assigned to the eth2 interface.

You can keep adding prefix-interface stanzas like these for as many /64s as you have in your delegation.
The final example is non-standard and probably won't work in most cases, but I want to point out how it works.

This time, we want to get a new delegation from our ISP that is different than our first.

interface eth0 {
 send ia-na 1;
 request domain-name-servers;
 request domain-name;
 script "/etc/wide-dhcpv6/dhcp6c-script";
 send ia-pd 1;
 send ia-pd 2;
};

id-assoc na 1 { };

id-assoc pd 1 {
 prefix-interface eth1 {
 sla-len 16;
 sla-id 0;
 ifid 1;
 };
 prefix-interface eth2 {
 sla-len 16;
 sla-id 1;
 ifid 1;
 };
};
id-assoc pd 2 {
 prefix-interface eth3 {
 sla-len 16;
 sla-id 16;
 };
};

Assuming the ISP delegated the second prefix in 2001:db8:ff00::/40, we would get 2001:db8:ff01::/48 for this delegation.
The address would be 2001:db8:ff01:f::de9f:dbff:fe29:75ca. (the actual host bits will be based on the interface’s MAC address)

Breaking it down.

send ia-pd 2;

Request another delegation.

id-assoc pd 2 {

Create a new id-assoc pd stanza.

prefix-interface eth3 {

Assign to eth3

sla-len 16;

Same prefix length, because that is what the ISP issues.

sla-id 16;

Using the 16th prefix out of the delegation.

Note that there is no ifid directive; this will default to using the EUI-64 postfix.

From there expand and extend your network as needed.

The configuration of radvd is the same as what is posted on ipcalypse, just with additional interface stanzas.

interface eth1 {
 AdvSendAdvert on;
 prefix ::/64 {
 AdvOnLink on;
 AdvAutonomous on;
 };
};
interface eth2 {
 AdvSendAdvert on;
 prefix ::/64 {
 AdvOnLink on;
 AdvAutonomous on;
 };
};
interface eth3 {
 AdvSendAdvert on;
 prefix ::/64 {
 AdvOnLink on;
 AdvAutonomous on;
 };
};
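
After editing radvd.conf, restart radvd so it begins advertising on the new interfaces:

service radvd restart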

Important to note in case you didn't read the source: you need to enable forwarding and force RA acceptance on the WAN interface.
/etc/sysctl.conf

net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.eth0.accept_ra=2

It is important that you ONLY set accept_ra=2 on the upstream/WAN interface. You do not want your router configuring another default gateway because of a rogue router on your LAN.
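
These sysctl settings apply at boot; to load them immediately without rebooting:

sysctl -p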

Lab Environment.
To develop this config, I used a 60-day eval of a Cisco CSR1000v. I did this because I ran across several sites describing how to configure a PD server on the CSR, but none that were clear about running wide-dhcpv6 as a server.

These two sites will get you most of the way.

http://www.cisco.com/en/US/tech/tk872/technologies_configuration_example09186a0080b8a116.shtml
http://www.cisco.com/en/US/docs/ios-xml/ios/ipaddr_dhcp/configuration/xe-3s/ip6-dhcp-prefix-xe.html#GUID-EDB04D43-E1E2-4291-997A-30A767C29738

Here are the relevant configuration bits.

ipv6 unicast-routing
ipv6 dhcp pool dhcp-pool
 prefix-delegation pool client-prefix-pool1 lifetime 1800 600
 dns-server 2001:db8:88:100::10
 domain-name example.com
!
interface GigabitEthernet1
 negotiation auto
 ipv6 address 2001:db8:88:100::1/64
 ipv6 enable
 ipv6 dhcp server dhcp-pool
!
ipv6 local pool client-prefix-pool1 2001:db8:ff00::/40 48

The not-so-improved usability of Windows 8.

Last weekend, I purchased a Samsung Series 5 all-in-one PC which happened to be running Windows 8. Up till that point, my experience with Windows 8 had been dissatisfying, but I was willing to give it a go on a proper touch screen device, which it is touted to be great for.

In my first week of using it, I have concluded that it is OK for a touch screen device, but there is no way in hell I would use it on a keyboard+mouse-only system. All of the touch-centric UI features make the traditional interaction very cumbersome: more clicks, further mousing, illogical placement and/or removal of configuration settings.

My intention with this post is to highlight some of these… deficiencies that I have… discovered while using Win8 on a touch screen device.

Let the list begin!

1. Context switching… this is a big one. Every time you switch between the desktop, the "start menu" and a Metro app, you get a full context change. It is pretty similar to the location-updating effect, the phenomenon behind why people tend to forget what they are doing when walking through doorways or similar spatial transitions. This puts a mental tax on the user to keep track of what they were doing. Universally known as bad UI design.

2. Hidden controls, another fundamentally bad UI design. In this case the learning curve is fairly short, but the controls are still hidden and can take quite a while for someone to discover.

3. Searching for apps/settings is annoying. They have segmented apps, settings, and files in the search results. Whereas in Win7 a search from the start menu included all of those in one set of results, in Win8 you have to switch between the categories. They are NOT unified anymore. Now, to search settings, I have to do Charm bar -> Search -> Settings -> then type.

4. Devices… From the charm bar, you select Devices and it shows only my monitor. Go to Settings -> Devices and it shows all of the devices. Why the distinction?

5. Missing Bluetooth connect options, a serious WTF. I have my system paired to some speakers. If you have ever used a BT device, you know that sometimes things don't pair up automatically like they should and you have to go click the connect button. If you view the device in the above-mentioned Charm bar -> Settings -> Devices list, there is no option to connect or disconnect. Only add, remove, rename. How did they not think this little detail was important? After poking around for 30 minutes, I finally found a way to force a connect: right-click the volume icon -> Playback devices -> right-click the speakers -> Connect. Why would you hide the connect button so far down? Your typical user would never find that. I seriously hope I am missing something here that makes reconnecting much simpler.

6. Forced full screen apps… The interaction between Metro and desktop apps is appalling. I hope you didn't start in a Metro app and then need to look at something in a desktop app at the same time. Sure, you can put 2 apps up on the same screen, but one of them is 1/5th or 1/6th of the screen width and cannot be adjusted. If you can't picture that, it basically amounts to a notification and control area for a running app: very limited space to read, view, or interact.

At this point, I am going to try some alternatives like Ubuntu 12.10, Android x86, and Win7. I am beginning to think something like ChromeOS would be ideal for this thing.

Finally, a GUI version of the screen+irssi/tmux+weechat combo!

After years of using screen+irssi or tmux+weechat, I finally found a GUI app that provides the same functionality: Quassel IRC. It is a combined IRC client and BNC-style server, built so the two integrate tightly with each other. No more re-attaching to screen sessions or struggling to make BNCs behave rationally.

There are two components: the client and the core. The core provides the relay functionality, and the client is just that. Set up the core service on the server you have been using for screen or a BNC; just start the process. Then you connect with the client. On the first connection you establish an administrative user account and start building your IRC networks.

Once you are all set up, install the client on your other devices, Linux/Win/OSX/Android (sorry, no iOS yet), and connect it to the core. Everything you do on any of the clients is automagically sent to all connected clients. All of your joined channels are there, all/most of your preferences transfer over, and everything uses the same nick/connection to the IRC network. One of the coolest features is the automatic scrollback: a new client automatically pulls n lines of scrollback, and if you want to read earlier than that, once you hit the end it will pull some more!

If you are an IRC user who has wanted a GUI without giving up the benefits of screen, check out Quassel.

When connection and password management collide!

Some time ago, I moved to using KeePass for all of my password safe needs and it has been fantastic. I have only a few minor issues with it, but those will be fixed in time, I am sure. But this post isn't about the awesomeness of KeePass; this is about the awesomeness of a connection manager that integrates with KeePass!

In my journey to find better tools to manage my ever-growing list of servers, I have investigated numerous connection management tools. Recently I came across Remote Desktop Manager, and I found the one (for now)! RDM does all of the things that all of the others do just fine. It has some idiosyncrasies, but that is expected of any product. To its benefit, the developers are very active on the forums and listen to everyone's suggestions. Bugs get fixed and features get implemented. Very cool to see that unfold in a single forum post.

What makes RDM stand out is that it integrates with password management utilities, and it isn't just one or two: 1Password, KeePass, LastPass, Firefox, Chrome, Password Safe, and several others. What this means is, I click a stored connection and it pulls the username and password out of my safe and applies them to the connection. For example, I have an RDP session saved for my local DC. When I click the connection, it prompts me to unlock my safe if it is locked, pulls the username/password out, populates the appropriate fields, and establishes the connection. One click and I am logging in. Fucking awesome, to say the least. Now you might point out that others will store usernames and passwords and do all of that for you, and you are correct: they will store them for you. The big deal here is that I am not defeating the purpose of a password safe by having to manage passwords all over the place.

I am a little disappointed that it isn't OSS, but not everything can be, and this is an app that I am willing to pay for.

They give a 30-day trial. It takes a little effort to get started. It offers Dropbox integration and a wide variety of other ways to store sessions and credentials. It would be a great tool for teams: a shared connection list plus a little clever use of password safe integration would make a mean productivity tool.

Bricking a buffalo WZR-HP-AG300H and recovery.

Because I can never leave well enough alone, I have bricked yet another device and had to spend a few hours figuring out how to recover it. This time around it was a Buffalo WZR-HP-AG300H, which I am hoping will be the wireless router to replace my aging WRT54GL.

So the AG300H comes with two firmware choices: a rebranded DD-WRT labeled "Professional" and a Buffalo some-such-junk labeled "Friendly". The device ships with the Pro firmware. Because I had to "see" what my firmware choices were and how they worked, and general tinkering type of things, I wanted to see if OpenWRT was any better. It turns out that is a resounding NO. So much NO that it is the cause of me bricking the device.

WARNING: At the time of writing, OpenWRT bricks the shit out of the AG300H.

Anyway, moving on to the recovery.

There was a lot of conflicting info out there about how to actually do the recovery and I think that stems from there being a lot of similarities between the AG300H and the G300NH.

To the good stuff..

To unbrick your AG300H:

  1. Unplug the router
  2. Boot up your favorite Linux LiveCD or VM. If using a VM, put the NIC in bridging mode.
  3. Install a tftp client such as tftp-hpa from the Ubuntu repositories.
  4. Obtain a copy of the necessary firmware.  I briefly tried the alpha code and it felt very alpha.  I suggest the DD-WRT Professional Firmware 12.36 MB 2011-05-13
  5. Add 192.168.11.2 to your ethernet interface.
    ip addr add 192.168.11.2/24 dev eth0
  6. Set a static ARP entry of 02:AA:BB:CC:DD:20 for 192.168.11.1
    arp -s 192.168.11.1 02:AA:BB:CC:DD:20
  7. Start a tftp connection to 192.168.11.1
    tftp 192.168.11.1
  8. Set a few parameters to make this easier. Retransmit every 1 second (rexmt) and give up after 60 seconds (timeout).
    tftp> verbose
    trace
    binary
    rexmt 1
    timeout 60
  9. The next two steps need to happen in pretty quick order.
    put Professional15940.enc
  10. Plug in the router.

If all goes well, you will see a stream of “send DATA” and “received ACK” messages which will end with a tftp prompt.

Your router will have a flashing red diag light. Give it at least 6 minutes to do its thing. Once it is done, you will probably see the wifi lights come on. Power cycle it again just for good measure and you should have a nicely recovered router.

I have to give credit to the anonymous poster on this page for making a concise set of instructions.

Galaxy Tab 10.1 LTE, Rooting, and Bricking Oh My!

I got my Samsung Galaxy Tab 10.1 LTE today and, of course, I rooted it. Rooting went well, so I moved on to installing my standard suite of applications, including ROM Manager and, by extension, ClockworkMod Recovery.

In my zeal, I didn't do any research on compatibility and flashed away. After the successful flash, I started an initial backup. From there it went downhill: the backup process rebooted the tablet into a boot loop in which only the screen backlight turned on and off.

Google and XDA to the rescue! I found this thread describing exactly my issue and what I had done. Thanks go to a gentleman by the name of mattj949, who posted a recovery image for the 10.1 LTE device that includes CWM and root.

Taking his image and a copy of Odin3 1.85 (as recommended), I proceeded to perform my recovery.

1. Boot your LTE Tab into “Download” mode.

From powered off, press and hold Power and Volume Up until the icons appear.
The Download icon will be selected by default; press Volume Down.
You will get a warning saying you can break stuff, press Volume UP.
Plug in the USB cable to your computer.
Put down the tablet.

2. Run odin as administrator.
3. Click the PDA button and point it to the image.
4. Click Start.

If everything goes well, it will reboot automatically and CWM will be working.

Vyatta HA Clustering with MAC failover

In the land of Cisco HA, the MAC address is actually failed over during a transition. This makes for a fairly clean switch between devices. Vyatta, however, doesn't work that way. It simply moves the IP address and uses gratuitous ARP to get the switch(es) to recognize the change.

There is a way to have Vyatta mimic the MAC failover mechanism through the use of pseudo-ethernet devices. These make use of macvlans, which allow you to add multiple MAC addresses to a single interface. The basics: you create a peth on both devices with the same MAC address but no assigned IP address, then create a cluster service which fails over the IP address. The configuration is actually very simple.

In this example we create peth0, which is tied to eth0. The shared MAC address is 00:1f:aa:bb:cc:dd and the IP address we are going to fail over is 192.168.1.1.

set interfaces pseudo-ethernet peth0 link eth0
set interfaces pseudo-ethernet peth0 mac 00:1f:aa:bb:cc:dd
set cluster group test service 192.168.1.1/24/peth0/192.168.1.254

The end result is that the active node has an address on the peth and the standby node has no address at all. Having no address effectively makes the secondary node inactive.

Per Interface vs. Zone Based Firewall

Every so often, I get asked why I feel a zone-based firewall is better than a per-interface firewall. It can be a complicated question to answer depending on the asker's level of understanding, so my goal here is to provide a simple and clear description of why a zone-based firewall is the more secure solution.

In all firewall variants, we match against multiple attributes of a packet: source IP, source port, destination IP, destination port, session state, protocol, and various other logical values depending on the implementation. The primary difference between ACLs and zones is how they apply to the physical or layer 2 characteristics.

An ACL is a firewall rule applied to a single interface for a specific direction of traffic, such as inbound on eth0 or outbound on eth1. This means only packets traveling in that specific direction on that specific interface will be matched, regardless of whether the higher layer details match.

A zone-based firewall differs in that, instead of matching on an interface and a direction, it matches on source and destination zones, which translates to matching inbound on one interface and outbound on a second interface.

Side note: most zone-based firewalls allow you to have multiple interfaces as members of a single zone. For example, if you had a router with 4 interfaces and were using two of them for a DMZ, you could add both DMZ interfaces to the DMZ zone, and all rules which apply to that zone would apply to traffic on both of those interfaces.

To illustrate the differences, consider a router with 4 interfaces (LAN, WAN, DMZ1, DMZ2) where we want to allow traffic from the LAN to the WAN on tcp/80 for web traffic to the Internet.

To accomplish this with an ACL we could apply an inbound allow rule on the LAN interface such as:

allow LAN tcp/80 any

This would give us overly broad results, since we only matched on the inbound interface: a destination of any could sit behind any of the other interfaces, so the rule permits tcp/80 from the LAN to the WAN and to both DMZs.

Another alternative is to use an outbound ACL on the WAN interface.

This would give us the desired results, but it is against best practices. By utilizing outbound rulesets, a packet is able to traverse the majority of the firewall's operations before it gets blocked. This spends more resources on a packet that will be dropped, and it creates a larger attack surface for vulnerabilities in the firewall itself.

The next option is to use multiple deny statements on your inbound interface to restrict traffic.  For example:

allow LAN:any DMZ1:IP1:tcp/443
deny LAN:any DMZ1:any
deny LAN:any DMZ2:any
allow LAN:any any:tcp/80
deny any any

Implementing inbound-only ACLs adds management complexity. Specific allows have to go above the internal deny statements; generic allows have to go after them. While this is easy to manage when you only have a few rules, it doesn't scale well when you get into hundreds of rules. It becomes very easy to make a mistake which leaves your network wide open. Imagine adding an entry to your inbound LAN ACL which allows any destination above the internal deny statements, as the second rule does in the example below. You would have just opened up your LAN to everything else; a compromised host would have unfettered access to everything in your network. If you have any regulatory security requirements, you would have just made a very big and potentially costly mistake because of a simple ordering issue.

allow LAN:any DMZ1:IP1:tcp/443
allow LAN:any any:tcp/443
deny LAN:any DMZ1:any
deny LAN:any DMZ2:any
allow LAN:any any:tcp/80
deny any any

With a zone-based firewall, it is much more difficult to make a mistake that leaves your network wide open, since you have to specify a source and destination zone for every rule. I have yet to see a firewall that offers "any" as an option for the source or destination zone.

To implement this access with zones, you need only one rule to get the best results: an explicit allow from the LAN zone to the WAN zone, with explicit denies on the DMZ zones. A sketch follows.
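
In the same shorthand as the ACL examples above (my own notation, not any vendor's syntax), the whole policy looks like this:

allow LAN->WAN any:tcp/80
deny LAN->DMZ1 any
deny LAN->DMZ2 any

Because every rule names both zones, a misplaced generic allow cannot silently leak into the DMZs the way the mis-ordered ACL example above did.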