Ad Terras Per Aspera: Transmissions from the Little Blue Marble

Stop XQuartz from showing icon in OSX Dock (2014-11-19)

Edit /Applications/Utilities/XQuartz.app/Contents/Info.plist as root (i.e., sudo vim in a terminal) and add the following key/value pair inside <dict>:

<key>NSUIElement</key>
<string>1</string>

… and then restart XQuartz.
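
If you’d rather script the edit than open vim, PlistBuddy should be able to make the same change; a minimal sketch (untested here, but Add is standard PlistBuddy syntax):

sudo /usr/libexec/PlistBuddy -c 'Add :NSUIElement string 1' /Applications/Utilities/XQuartz.app/Contents/Info.plist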

Features Safari needs to have to be considered as an everyday browser (2013-11-08)

I’ve recently switched from Firefox to Safari to see how Safari has caught up, using my Macbook Pro 13″ Retina. The machine came with OSX 10.8 and Safari 6; at the time I didn’t even bother trying out Safari, and Firefox was one of the first things I installed.

Safari does many things better than Firefox:

  • It renders text much better than Firefox on a Retina display. On a non-Retina display the two render differently, but neither is outright better. I’m pretty sure Firefox on OSX supports Retina, so I don’t know why its text rendering is inferior.
  • It uses less memory loading the same pages (Gmail, G+, HN, Reddit, Feedly, which comprise my daily viewing experience).
  • It supports ICCv4 color profiles, and supports per-monitor color profiles. Firefox only supports ICCv2 (while most color profiling tools output either ICCv4 only, or ICCv4 by default with the ICCv2 option hidden in the preferences), and it only uses the primary monitor’s profile instead of the profile for the monitor the window is currently on. Firefox really needs to fix this.
  • It does seem to increase my battery life over Firefox.

It does things better than it used to:

  • It’s a lot faster than it used to be, especially with Javascript. Apple says it is faster than Firefox stable, but I can’t tell if it is faster than Firefox nightly (what I usually run) without benchmarking it. Firefox and Safari are now both fast enough with Javascript execution that I can’t tell the difference between the two.
  • It now supports session restore. I think this was added in Safari 6, but it was a long-time sore point for Safari users and one of the reasons I never used Safari.

What it doesn’t do and shouldn’t leave up to extensions:

  • Lack of useful undo history. Safari 5 introduced undo close tab, but it still doesn’t have undo close window, or selectively undoing close tabs/windows out of your history. Plus, undo close tab is bound to ctrl-z instead of ctrl-shift-t. There is no extension to fix this.
  • Does not pop up the URL at the bottom of the screen when hovering over a link. I installed Ultimate Status Bar to add this.
  • Keyword search. Safari still lacks this, so I added Safari Omnikey to fill the gap; however, the Omnikey button must remain in the toolbar or it won’t function, cluttering your toolbar up.
  • Does not display favicons on tab labels.
  • Cannot position new tabs flexibly (such as at the end of the tab bar).
  • Does not focus the last selected tab when closing a tab.
  • Minimum tab size is far too large.
  • Cannot maximize Safari.

These last five can be fixed by installing Glims. If you use Glims on Safari 7, disable everything in Glims (in the Glims->General preferences panel) except “Other Tabs Improvements” and “Add Max Window Size Menu Option”; everything else conflicts with Safari 7’s built-in functionality (but is still useful for earlier versions of Safari). Look in the Glims->Tabs Misc preferences panel to enable favicons, smaller tab sizes, focus-last-selected, and new tab position, and remember to turn off Glims’ ads in the Glims->Ads/Shopping preferences panel.

Apple could catch up to Firefox as a modern browser by implementing these features.

How to make the Insert/Help key emit Insert in iTerm2 (2013-08-29)

Open iTerm2’s preferences and go to the Keys tab. Hit the + button at the bottom to add a new key binding. Press your Insert/Help key to set it as the shortcut, select Send Escape Sequence as the action, and set [2~ as the escape sequence.
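
To sanity-check the binding (a quick test of my own, not part of the setup steps), run cat -v in the terminal and press Insert/Help; you should see the sequence echoed as ^[[2~, where ^[ is the escape character:

cat -v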

Running DRM/DRI/Mesa/DDX/Xorg from git on Debian Sid (2013-06-30)

A lot of the guides out there on how to build the entire X stack from scratch are missing steps or only cover part of the stack. This assumes you’re on Radeon; substitute appropriately for Intel or NVidia (download a different driver, use different options with Mesa).

Please note: I haven’t been able to get i386-on-amd64 builds to work yet, so if you need to run 32 bit apps on 64 bit, stick with your distro’s build for everything; don’t mix and match.

To get the source:

sudo apt-get build-dep libdrm mesa xserver-xorg-video-ati xorg-server
sudo apt-get install git llvm-3.4-dev libelf-dev linux-headers-`uname -r` build-essential
mkdir xorg && cd xorg
git clone git://anongit.freedesktop.org/git/xorg/util/macros
git clone git://anongit.freedesktop.org/git/xorg/proto/x11proto
git clone git://anongit.freedesktop.org/git/mesa/drm
git clone git://anongit.freedesktop.org/git/xorg/lib/libXau
git clone git://anongit.freedesktop.org/xorg/xserver
git clone git://anongit.freedesktop.org/git/mesa/mesa
git clone git://anongit.freedesktop.org/git/xorg/driver/glamor
git clone git://anongit.freedesktop.org/xorg/driver/xf86-video-ati
git clone git://anongit.freedesktop.org/xorg/driver/xf86-input-evdev

LLVM version
Mesa requires LLVM 3.4 or newer, and Debian Sid still defaults to 3.3. Until this is fixed, make sure the package llvm-3.3 is not installed, and run:

sudo ln -s /usr/bin/llvm-config-3.4 /usr/bin/llvm-config

Modify the environment
Add to /etc/environment:

LIBGL_DRIVERS_PATH=/opt/xorg/lib/dri/
R600_DEBUG=sb

Add to /etc/X11/xorg.conf:

Section "Files"
        ModulePath "/opt/xorg/lib/xorg/modules,/usr/lib/xorg/modules"
EndSection

Section "Module"
  Load "dri2"
  Load "glamoregl"
EndSection

… and in the Device section for your video card add Option "AccelMethod" "glamor" and make sure Driver is set to "radeon".

Create /etc/ld.so.conf.d/0-xorg-git.conf:

/opt/xorg/lib

Build the code

export PKG_CONFIG_PATH=/opt/xorg/lib/pkgconfig:\
/opt/xorg/share/pkgconfig:${PKG_CONFIG_PATH}
export LD_LIBRARY_PATH=/opt/xorg/lib:${LD_LIBRARY_PATH}
export LD_RUN_PATH=/opt/xorg/lib:${LD_RUN_PATH}
export LDFLAGS=-L/opt/xorg/lib CPPFLAGS=-I/opt/xorg/include
export ACLOCAL="/usr/bin/aclocal -I /opt/xorg/share/aclocal"

cd macros
./autogen.sh --prefix=/opt/xorg
sudo make install
cd ..

cd x11proto
./autogen.sh --prefix=/opt/xorg
sudo make install
cd ..

cd drm
./autogen.sh --prefix=/opt/xorg
make -j4
sudo make install
cd ..

cd libXau
./autogen.sh --prefix=/opt/xorg
make -j4
sudo make install
cd ..

cd xserver
./autogen.sh --prefix=/opt/xorg --enable-xorg --disable-dmx --disable-xvfb --disable-xnest --disable-xwin
make -j4
sudo make install
cd ..
sudo chown root /opt/xorg/bin/Xorg
sudo chmod u+s /opt/xorg/bin/Xorg
sudo ldconfig
sudo ln -s /usr/bin/xkbcomp /opt/xorg/bin/xkbcomp
sudo ln -s /usr/share/X11/xkb/rules /opt/xorg/share/X11/xkb/rules

cd mesa
export DRI_DRIVERS="radeon,r200"
export GALLIUM_DRIVERS="r300,r600,radeonsi,swrast"
./autogen.sh --prefix=/opt/xorg --with-dri-drivers=$DRI_DRIVERS --with-gallium-drivers=$GALLIUM_DRIVERS --with-egl-platforms=x11,drm --enable-gbm --enable-shared-glapi --enable-glx-tls --enable-driglx-direct --enable-gles1 --enable-gles2 --enable-r600-llvm-compiler --enable-xorg --enable-xa --enable-gallium-egl --enable-gallium-gbm --enable-texture-float
make -j4
sudo make install
cd ..

cd glamor
./autogen.sh --prefix=/opt/xorg
make -j4
sudo make install
cd ..

cd xf86-video-ati
./autogen.sh --prefix=/opt/xorg --enable-glamor
make -j4
sudo make install
cd ..

cd xf86-input-evdev
./autogen.sh --prefix=/opt/xorg
make -j4
sudo make install
cd ..

All you have to do now is tell your XDM to use /opt/xorg/bin/Xorg and then restart it. The X server puts its log in /opt/xorg/var/log/Xorg.0.log.
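
For example (both paths from memory, so double-check against your display manager’s docs), plain xdm takes the server path in /etc/X11/xdm/Xservers:

:0 local /opt/xorg/bin/Xorg

… while lightdm takes it as xserver-command in /etc/lightdm/lightdm.conf:

[SeatDefaults]
xserver-command=/opt/xorg/bin/Xorg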

Small updates to my CD/DVD archival media article (2013-01-11)

Man, it feels funny updating that after so long. I originally wrote it in 2006, so that’s 7 years ago.

Firefox can be fast under FGLRX (2012-12-30)

Use about:config to set:

gfx.xrender.enabled = false
layers.acceleration.force-enabled = true

Canvas rendering should be much, much faster. Remember to set these back to their defaults if you switch drivers; the same isn’t true on DRI/Gallium Radeon.

Fixing “Wrong principal in request” in Kerberos 5 (2012-11-28)

krb5_newrealm doesn’t seem to add enough lines to /etc/krb5.conf. To fix this, add the following lines to /etc/krb5.conf on every machine participating in the realm. My local realm is LAN; substitute your own. Some of these lines may already exist; keep those and add whichever are missing.

[realms]
   LAN = {
     kdc = infinity.lan
     admin_server = infinity.lan
     default_domain = lan
   }

[domain_realm]
   .lan = LAN
   lan = LAN

All hosts/servers participating in the realm that offer Kerberized services should have a FQDN that ends in your realm’s domain name (.lan in my case).
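
To confirm the fix (a quick check using my example realm and KDC from above; someuser is a placeholder principal), get a ticket and then request a service ticket for a host in the realm, which is exactly the step “Wrong principal in request” breaks:

kinit someuser@LAN
kvno host/infinity.lan@LAN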

Making GTK3 apps try to look more native in XFCE on Debian (2012-09-19)

XFCE is a GTK2 environment, but a lot can be done to make GTK3 apps look better on XFCE.

I prefer to use Clearlooks as my GTK2 theme and the GNOME icon set. On Debian, apt-get install gnome-icon-theme gnome-icon-theme-extras gnome-icon-theme-symbolic clearlooks-phenix-theme, then use XFCE’s Appearance settings to change your theme to Clearlooks-Phenix and your icons to GNOME.

If you’re using an XFCE GTK engine theme instead, install gtk3-engines-xfce instead of clearlooks-phenix-theme; there is also a gtk3-engines-oxygen package, which provides a native look-alike of KDE4’s Oxygen theme.

You probably should restart your X session after fiddling around with this to fix apps that don’t change themes at runtime properly.
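
If a stray GTK3 app still ignores the theme (for example, one launched outside the session before XFCE’s settings are applied), you can pin the names in ~/.config/gtk-3.0/settings.ini; a minimal sketch using the theme and icon set above:

[Settings]
gtk-theme-name = Clearlooks-Phenix
gtk-icon-theme-name = gnome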

Fixing overly strong LCD sub-pixel filtering on Debian and Ubuntu (2012-09-02)

Some people think the sub-pixel color fringing is too strong when they have sub-pixel anti-aliasing on. If your install is old enough, you might not have the correct symlinks in /etc/fonts/conf.d. Do…
sudo ln -s /usr/share/fontconfig/conf.avail/11-lcdfilter-default.conf /etc/fonts/conf.d/

… and restart X. This should fix the problem.
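
Alternatively (not something I’ve verified on every release), Debian exposes similar font rendering choices through debconf, which manages these conf.d links for you:

sudo dpkg-reconfigure fontconfig-config
sudo dpkg-reconfigure fontconfig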

How to watch NASA TV on Linux (2012-08-06)

First, I’d like to say congratulations to NASA on the landing of Curiosity. It was worth every dollar of that $2 billion.

mplayer 'http://nasa-f.akamaihd.net/public_h264_700@54826' happily plays the stream.

How to send audio between two Linux computers using netcat (2011-08-15)

Apparently there is no dead simple way to send audio from one computer to another in a low(er) latency way.

You can’t beat this; it works for any ALSA app whose output you can change (or just change your default in .asoundrc).

On source computer:
modprobe snd-aloop
arecord -f cd -D hw:Loopback,1,0 | netcat dest 1234
mplayer -ao alsa:device=hw=Loopback.0.0 something.mp3

On destination computer:
netcat -k -l -p 1234 | aplay
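
If you’d rather not point each app at the loopback by hand, a minimal ~/.asoundrc on the source computer can make it the default (a sketch assuming snd-aloop’s usual behavior: playback on device 0 of the Loopback card comes back out the capture side as hw:Loopback,1, which is what the arecord line above reads; note this also silences local playback):

pcm.!default {
    type hw
    card Loopback
    device 0
}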

Update: Oh, and apparently you can do this in Windows, too.

Linux layer 2 bridging can’t do Firewire (2011-08-05)

Well, it seems the Linux kernel can’t bridge dissimilar network types, which means I can’t bridge Ethernet and Firewire. (This is a workaround until I replace the NIC in my desktop, which fried; until then, my laptop routes for my desktop.)

My laptop’s IP is 192.168.2.4
My desktop’s IP is 192.168.2.2
My router’s IP is 192.168.2.1

So, the workaround seems to be this…

On the laptop, with eth0 already up:
ifconfig firewire0 up 192.168.2.4
route del -net 192.168.2.0 netmask 255.255.255.0 dev firewire0
route add -host 192.168.2.2 dev firewire0
iptables -F
iptables -P FORWARD ACCEPT
echo 1 > /proc/sys/net/ipv4/conf/all/forwarding

On the desktop:
ifconfig firewire0 up 192.168.2.2
route add default gw 192.168.2.1

To make this permanent, you would edit /etc/network/interfaces like this…

On the laptop:
auto eth0
iface eth0 inet static
address 192.168.2.4
netmask 255.255.255.0
broadcast 192.168.2.255
gateway 192.168.2.1
post-up ifconfig firewire0 down
post-up ifconfig firewire0 up 192.168.2.4
post-up route del -net 192.168.2.0 netmask 255.255.255.0 dev firewire0
post-up route add -host 192.168.2.2 dev firewire0
post-up iptables -F
post-up iptables -P FORWARD ACCEPT
post-up echo 1 > /proc/sys/net/ipv4/conf/all/forwarding

On the desktop:
auto firewire0
iface firewire0 inet static
address 192.168.2.2
netmask 255.255.255.0
broadcast 192.168.2.255
gateway 192.168.2.1

USB 3.0 works under Linux (2010-08-22)

I decided that I needed a real backup solution, even though I have a RAID 5 for file storage in my workstation; maintaining a backup of a 2TB array is a pain in the ass if all you have is blank DVDs.

So, I purchased a Vantec NexStar 3 SuperSpeed (NST-380S3) enclosure, a Samsung EcoGreen F3EG 2TB 5400rpm (HD203WI) drive, and a USB 3 PCI-E controller.

It seems the only shipping USB host controllers at the moment all use NEC’s USB 3.0 chip, and almost all the PCI-E boards look alike. They all seem to run in the $25-45 range. The great part is Linux supports NEC’s controller as of 2.6.31. The controller worked with no configuration as soon as I put the card in.

I chose that specific Samsung drive because it seems to be the only sane 5400 rpm 2TB drive out there. The only other choices were Seagate’s new 5900 rpm drives (which, according to independent reviews on Newegg and enthusiast forums, have an unacceptably high failure rate, very unusual for Seagate), and Western Digital’s Caviar Greens (which are 5400 rpm, but suffer from obsessive head parking, which is apparently leading to premature drive failure).

Several reviews peg the HD203WI at an average of 90MB/sec for sequential writes, or about 2-3x the speed of USB 2.0.

mkfs.ext4 took 7:44 to create the file system (while iotop confirmed it was doing in excess of 100MB/sec writes for much of the process), and hdparm -t /dev/sdx also indicates the drive in this enclosure can push 100MB/sec.

After writing to the drive for an hour straight, the enclosure is warm but not hot, and after removing the drive from the enclosure, the drive itself is warm; this is compared to the Seagate 7200.12s in my RAID 5 array which could burn you at this point.

Many drives fail in enclosures because they overheat; I don’t think this will happen here, due to Vantec’s thick aluminum design in the NexStar series enclosures and the fact that the HD203WI has low power usage.

After formatting with ext4, the file system uses 29GB out of 1.82TB total. It’s kind of funny; I’ve owned drives smaller than the space consumed by an empty file system.

I’m rather happy with my purchases overall.

Approximate Youtube Bitrates (2010-05-24)

I’ve been wondering what bitrates Youtube produces on files, but they don’t say upfront.

New videos are encoded in eight formats. However, due to a bug in Youtube, some 24 fps videos (such as those from film sources) will have duplicate frames inserted to make them 30 fps, causing a very noticeable jitter approximately twice a second.

Format   Video codec                                            Audio codec          Container
37       H.264 1920×1080, 24/30 fps                             AAC 44.1khz stereo   mp4
22       H.264 1280×720, 24/30 fps                              AAC 44.1khz stereo   mp4
35       H.264 854×480, 24/30 fps                               AAC 44.1khz stereo   flv
34       H.264 640×480, 24/30 fps                               AAC 44.1khz stereo   flv
18       H.264 480×360, 24/30 fps                               AAC 44.1khz stereo   mp4
5        Sorenson Spark 320×240, 24/30 fps                      MP3 22khz stereo     flv
17       MPEG-4 ASP, 12 fps, black bordered to fit 176×144      AAC 22khz mono       mp4
13       H.263+, 15 fps, stretched to 176×144 ignoring aspect   AMR 8khz mono        3gp

Note: This does not include WebM videos yet, as the support is still experimental and Youtube is not yet encoding WebM in 1080p, only 720p (format 45) and 480p (format 43).

Now let’s see how a couple of high quality videos fare on Youtube.

Format    Resolution   Video kbit/sec   Audio kbit/sec

The Dark Knight Trailer 3, 1080p, using the Apple version. 2:30 long. H.264, 6ch 48khz AAC audio, 24 fps. Youtube encoded this as a 30 fps video.

Original  1920×816     10518            260
37        1920×816     3427             108.8
22        1280×544     1998             108.8
35        (missing on Youtube)
34        640×272      517              95
18        480×204      500              108.5
5         320×136      257              64
17        176×144      55.3             27
13        176×144      55.6             13

Avatar Trailer, 1080p, using the Apple version. 3:29 long. H.264, stereo 44.1khz AAC audio, 24 fps.

Original  1920×800     9726             99
37        1920×800     3502             126
22        1280×534     2003             126
35        854×356      806              103.84
34        640×266      554              103.81
18        480×200      486              103.82
5         400×166      255              59
17        176×144      55               28
13        176×144      54               13

Big Buck Bunny, 1080p, using the Blender Foundation’s original version. 9:57 long. Theora, stereo 48khz Vorbis audio, 24 fps.

Original  1920×1080    11902            175
37        1920×1080    3531             125
22        1280×720     2020             125
35        854×480      990              107.9
34        640×360      494              108.02
18        480×270      435              108.03
5         400×226      250              59
17        176×144      55               30
13        176×144      49               12

With these 3 popular HD videos, it’s easy to tell what sort of bitrate Youtube tries to hit.

Format   Approximate bitrate target (video and audio)
37       3.75 mbit/sec
22       2.25 mbit/sec
35       1.25 mbit/sec
34       768 kbit/sec
18       768 kbit/sec
5        384 kbit/sec
17       100 kbit/sec
13       75 kbit/sec
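
If you want to repeat these measurements yourself, the format numbers above are the same itag values youtube-dl’s -f flag accepts, so you can pull down a specific encode and inspect it (VIDEO_ID is a placeholder; mplayer’s -identify output is one way to read off the codec details):

youtube-dl -f 22 'http://www.youtube.com/watch?v=VIDEO_ID'
mplayer -identify -frames 0 downloaded_video.mp4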

SLF4J and making JUL shut up (2010-02-15)

I’ve decided to switch to the Simple Logging Facade for Java (SLF4J) plus Logback to bridge java.util.logging (JUL), Log4J, and Apache Commons Logging all into one log output.

Problem is, JUL won’t shut up. Frameworks that log to JUL output the log to the console, and then SLF4J repeats it right after. However, putting this code in before running SLF4JBridgeHandler.install() seems to fix it:

// Detach JUL's default console handler from the root logger, so the only
// output is what comes through the SLF4J bridge.
java.util.logging.Logger root_logger = java.util.logging.LogManager.getLogManager().getLogger("");

java.util.logging.Handler[] root_handlers = root_logger.getHandlers();

root_logger.removeHandler(root_handlers[0]);

Now I get a single log output.

Ivy and Sun’s Java.net Maven repo (2009-09-24)

I want to use Sun’s Java.net Maven repo with Ivy, and this is not documented well anywhere.

In ivysettings.xml (Ivy will automatically use it), put:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE ivysettings>
<ivysettings>
  <settings defaultResolver="chained" />
  <property name="java.net.maven.pattern"
    value="[organisation]/jars/[module]-[revision].[ext]" />
  <resolvers>
    <chain name="chained" returnFirst="true">
      <ibiblio name="ibiblio" m2compatible="true" />
      <ibiblio name="java-net-maven2"
        root="http://download.java.net/maven/2/"
        m2compatible="true" />
      <ibiblio name="java-net-maven1"
        root="http://download.java.net/maven/1/"
        pattern="${java.net.maven.pattern}"
        m2compatible="false" />
    </chain>
  </resolvers>
</ivysettings>

Now you can make an ivy.xml with dependencies like <dependency org="com.sun.grizzly" name="grizzly-http" rev="2.0.0-SNAPSHOT"/> and have it work right.

Secure Glassfish v3 Admin Console (2009-09-18)

By default, the admin console can be accessed by the outside world. I prefer to have it accessible to localhost only (so I can only reach it over an ssh tunnel).

Open the admin console, and in the menu go to Configuration -> Network Config -> Network Listeners -> admin-listener; edit the IP to 127.0.0.1, hit Save, then restart Glassfish.
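
The same change can probably be scripted with asadmin set; the dotted attribute name below is my best guess for v3, so verify it first with asadmin get:

asadmin get 'server.network-config.network-listeners.network-listener.admin-listener.*'
asadmin set server.network-config.network-listeners.network-listener.admin-listener.address=127.0.0.1
asadmin restart-domain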

Feeding marshalled JAXB data to Jersey (2009-08-20)

Although Jersey supports eating JAXB’ed classes fine, sometimes you want to manually alter the output, such as including a processing instruction for an XSL stylesheet. There probably should be a less verbose way to do this.

The object should be an @XmlRootElement-annotated object.

@GET
@Produces("application/xml")
public static StreamingOutput outputXMLwithXSL() {
  return new StreamingOutput() {
    public void write(OutputStream output) throws IOException,
    WebApplicationException {
      Object object = yourJAXBObject();

      JAXBContext jc = null;
      try { jc = JAXBContext.newInstance(object.getClass()); }
      catch (JAXBException e) { e.printStackTrace(); }

      Marshaller m = null;
      try { m = jc.createMarshaller(); }
      catch (JAXBException e) { e.printStackTrace(); }

      // Write the prolog and stylesheet PI by hand, and flush so they
      // reach the stream before the marshalled body does.
      PrintStream ps = new PrintStream(output);
      ps.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
      ps.println("<?xml-stylesheet type=\"text/xsl\" href=\"your.xsl\"?>");
      ps.flush();

      // JAXB_FRAGMENT suppresses JAXB's own XML declaration, since we
      // already wrote one above.
      try { m.setProperty(Marshaller.JAXB_FRAGMENT, true); }
      catch (PropertyException e) { e.printStackTrace(); }

      try { m.marshal(object, output); }
      catch (JAXBException e) { e.printStackTrace(); }
    }
  };
}

EclipseLink JPA in Eclipse dumb error message (2009-08-12)

Sometimes you’re developing an app along with a new database schema to go with it, but you get this: Schema "null" cannot be resolved for table "XXXX".

Window -> Preferences -> Validation, JPA Validator, turn off for Build.

This probably shouldn’t be on by default anyhow; people are more likely to build new apps from scratch than to build new apps to fit old databases, and even if they do build from old, Eclipse’s JPA Tools has a build-entities-from-tables function.

Here Men From The Planet Earth First Set Foot Upon the Moon, July 1969 A.D. We Came in Peace For All Mankind (2009-07-21)

Here Men From The Planet Earth First Set Foot Upon the Moon, July 1969 A.D. We Came in Peace For All Mankind

Dealing with SSH’s key spam problem (2009-03-15)

Recently I created a new virtual machine locally, and I tried to ssh into it.

[diablo@infinity ~]$ ssh tachikoma
Received disconnect from tachikoma: 2: Too many authentication
failures for diablo
[diablo@infinity ~]$

I hadn’t put a key on tachikoma yet, and ssh didn’t ask for my password. It didn’t make any sense.

So, I ran the same command with -vvv and realized: it’s sending all my identity keys to tachikoma, and the sshd on that machine is kicking the connection because all of them fail.

What bizarre behavior.

So I dug around in the ssh_config man page (which covers ~/.ssh/config) and noticed I can just add…

host *
IdentitiesOnly yes

… to force ssh to only use specifically named identities, which (as I’ve been doing for years anyway) are written like this…

host some.remote.host.com
IdentityFile ~/.ssh/id_rsa_some.remote.host.com

… or something similar. With the IdentitiesOnly directive in there, ssh sends only the identity keys I specify with IdentityFile instead of spamming all the keys I have.
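
The same thing works as a one-off on the command line, which is handy for testing before committing it to the config (reusing the example host above):

ssh -o IdentitiesOnly=yes -i ~/.ssh/id_rsa_some.remote.host.com some.remote.host.com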

I’m not sure if this is a Debian-only problem (both infinity and tachikoma are Debian machines), but even though it’s a security feature, it’s kind of annoying.

Evil solution to the XSLT empty xmlns problem (2009-03-03)

I’m currently using XSLT, and I’ve come across the dreaded empty xmlns problem. My XML contains elements that do not have rules in my XSL stylesheet, and most XSLT engines append the attribute xmlns="" when they get confused about what namespace an element belongs to. I get bitten by this because I do not have an input DTD, as the document is not meant to be used as anything but fodder to create XHTML.

Many people even have this problem when using XSLT to transform XHTML into XHTML: the input and output namespaces are the same, and they’re using properly formatted and validated XHTML documents (complete with the doctype statement and the xmlns attribute on the html element).

Many people just want to force the transforming engine to blindly copy the elements as-is over to the new document and ignore the namespace issue. The XSL below should do this (one template for elements, one for attributes); use wisely.

<xsl:template match="*">
    <xsl:element name="{ local-name( . ) }">
        <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
</xsl:template>

<xsl:template match="@*">
    <xsl:attribute name="{ local-name( . ) }"><xsl:value-of select="."/></xsl:attribute>
</xsl:template>

So, now I can go use XHTML in my input and have it spit back out unmolested.
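
To try it, wrap the templates in an xsl:stylesheet element and run a document through xsltproc (file names here are placeholders):

xsltproc strip-ns.xsl input.xhtml > output.xhtml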

Random Perl: How to check if something is a number (2008-11-01)

Perl has no built-in function or sub to test if a variable is a number or not. Scalar::Util makes it easy, and it is a core module as well.

use Scalar::Util qw(looks_like_number);

my $number = 192;
my $string = '123foobarbazquux123';

if(looks_like_number($number)) {
  print "$number is a number!\n";
}

if(!looks_like_number($string)) {
  print "$string is not a number!\n";
}

Tada!

Solid state society: The future of common data storage (2007-06-30)

Fifty-one years ago, IBM did something amazing, something that changed the world and kick-started the computing revolution twenty years before Intel and Apple and Microsoft and everyone else declared they were open for business: IBM invented the hard drive.

A monster of a machine, a behemoth: one ton of spinning metal the size of a fridge that held exactly five megabytes on 50 two-foot platters, plus a bunch of controller hardware and buffer memory. This hard drive was the first of its kind, and helped spawn an entire industry of data storage; not only was it faster and easier to use and maintain compared to tape media, it was also expensive, and only a few companies could afford one.

The technology over the next few years shrank and increased in performance, and stories of “wash machines” dancing across the data center were well known. More and more companies started buying them to replace or supplement their tape drives, and eventually tape died out in the commercial sector.

Eventually, the three or four home computing revolutions came and went, and the two portable device revolutions came and went. Wash machines became small external units, those external drives became internal (5.25″ full height), and then they became smaller (3.5″) and smaller (2.5″) and smaller (1.8″) yet. Megabytes became gigabytes, became tens and hundreds of gigabytes, and finally, as of a few months ago, terabytes.

All of this technology ultimately works the same way: spinning platters with magnetic heads reading what an IBM engineer once named “magnetic milkshake.” The one single major flaw in this design is that anything that moves will eventually break down. Spinning drives slower won’t decrease the wear and tear, and neither will cooling them; and new bearing designs? They decrease noise and some wear and tear, but do not prevent mechanical failure.

We’ve invented new technologies, such as redundant arrays of inexpensive disks (RAID), to both increase performance and decrease the chance of mechanical failure eating your data. A suitably sized RAID 6 array can survive two drive failures before you risk data loss. But an array of, say, six to ten drives is huge and outside the realm of most people; and I haven’t seen Apple issue iPods with RAID arrays yet.

In addition to all of this, the magnetic heads have to move across the platter to read and write specific areas, which increases the time it takes to read random data (sequentially read data suffers from this less). If mechanical failure is the major issue of this design, seek time is the secondary issue.

In 1984, Dr. Fujio Masuoka invented flash memory: a non-volatile memory that can be used as data storage the same way you’d use tape or hard drives. Flash has no moving parts, nor does it use large amounts of power like hard drives do because of spinning platters. You see flash everywhere now: in your cell phones, in your digital cameras, in your hand held game systems, and also in your Wiis. We call drives built out of this technology solid state drives.

Laptops are now the key target: laptops never have enough power, battery technology is not keeping pace with our advancements in other technology, and until Santa Rosa, more than 3 hours of battery life under normal conditions on most laptops was impossible… now it’s simply medium difficulty. Flash technology has gotten very interesting because everyone from laptop manufacturers to silent computing aficionados to the enterprise sector wants flash tech to replace their spinning milkshakes.

Why Powered USB Is Needed, Part 3: USB 3? (2007-04-02)

Note: This article describes a version of USB that is not related to the new USB 3 spec that Intel has released for 2010 products.

I originally planned the Powered USB article as two parts: one explaining why USB took off, and another explaining why USB isn’t the best solution (it can’t power large devices) and why Powered USB isn’t the greatest solution either (it isn’t in consumer electronics yet, and it has the different-plugs-for-different-voltages issue as well).

What I didn’t plan on was all the Firewire fans popping up and saying I was wrong for pushing a Powered USB/USB 3 combo. For the record, I’m also a Firewire fan but haven’t gone to the fanatical levels some people have. Part 3 is for you guys.

Note: I originally intended for USB 3 and New Powered USB to be separate standards, allowing devices to use one or both (but New Powered USB would require USB 3 to negotiate for power usage). The way I will describe this possible future USB 3 in part 3 is basically folding the new data features into the New Powered USB part of the plug to remain compatible with USB 2 hosts/devices/hubs.

The problems with traditional USB are:

  1. It’s slow.
  2. Can’t allow devices to perform the DMA-like transfer method Firewire does [1].
  3. It’s slow.
  4. Uses polling to transfer data, thus eats CPU time like mad.
  5. It’s slow.
  6. Future USB specifications cannot perform interrupts to signal for the host to acquire new data, and can only use polling.
  7. It’s slow.

Even with these problems, I can still say future USB products look promising. The reason I chose a formalized New Powered USB specification combined with a future USB 3 specification is that almost everyone has USB ports; USB has become the ubiquitous peripheral port on all sorts of devices. Not all devices come in Firewire versions, and not all computers have Firewire ports.

The Proposed USB 3.0 Specification Checklist

USB is being held back by the fact that it can’t perform interrupts. USB 3 cannot just add them to the normal USB host interface stack: USB was never designed for it, and you can’t just add it as a feature that can be negotiated between host and client. It wouldn’t work.

However, what would work is adding a second new interface that is slaved to the first: allow USB 3 devices to negotiate to use this new second interface, and allow the second interface to work independently of the original legacy interface. This secondary interface could not only perform interrupts, but be able to do anything USB 2 is missing.

There are two reasons you need the independence: one, USB 3 hubs need to be able to transfer data from both legacy USB 2 devices and USB 3 devices (all the USB 2 data will be going across the legacy bus independently of USB 3 data); and two, USB 3 devices will no longer be using the legacy interface for any data transfers once the negotiation is complete. The legacy interface will act as an out-of-band interface for non-critical USB-related traffic (such as re-negotiation, negotiation for USB 3 clients plugged into USB 3 hubs, or just telling the host it’s still plugged in).

This second interface, since it is now independent, can virtually use any protocol it wishes. If the USB Working Group so decided, they could run Firewire unmodified over this second interface. The possibilities are endless. Most likely, it’d be some USB-IF designed Firewire clone.

There is something else I need to add to this checklist: USB 2 legacy communication with USB 3 hosts and clients. As I said, USB 3 devices would have to negotiate to use the secondary interface, but what happens if the host, client, or hub rejects this?

USB 3 clients will have to be able to do their work as standard USB 2 devices. Obviously, high bandwidth applications would run a lot slower, DV devices might not be able to be used in real-time, and Powered USB power features couldn’t be negotiated for (most likely the USB 2 client/host/hub would be using a normal USB plug in the first place). USB 3 hubs connected to USB 2 hosts would have to reject USB 3 connections as well and tell any USB 3 clients connected to the hub to run in USB 2 mode.

More pins doesn’t exactly mean a new data connector

Of course, this second interface requires more pins, yet has to stay compatible with the old USB plug. Frankly, doing what Firewire 800 did (adding a 9-pin socket/plug that requires an adapter to connect 6-pin devices and hosts) is possibly a bad idea, as it requires people to have yet another small part that is easy to lose. Adapter dongles and short adapter cables suck.

The idea I’ve been playing around with is to attach these new pins to the consumer-friendly New Powered USB plug I hinted at in part 2 of this article. Just add, say, four or six pins to the outside of the plug’s inner column, like how Type B USB plugs work now, while leaving the power pins on the inside of the column like they are in the original design. Remember, these pins are for data only; all three Firewire plug designs use only two pairs for data.

However the plug actually gets designed, I don’t care; it just has to be done in a way that doesn’t interfere with the legacy USB plug. Putting the new data pins in the New Powered USB half of the connector seems to be the ideal way of solving this without going Firewire’s route.

So that’s it? Just clone Firewire’s features?

The two main things that Firewire has over USB at this time are interrupts (which, if you haven’t figured it out yet, is why Firewire 400 actually hits 400mbps while USB 2 hits about half of its 480mbps) and the ability to power more power-hungry devices. That is why, in theory, Firewire is the better bus.

But as I said before, USB is the de facto standard for peripheral communication; Firewire is in far fewer devices, and some computers don’t even have Firewire ports. As much as I’d like to see Firewire kill USB, it is not going to happen any time soon.

So yes, what I’m saying is USB 3’s secondary interface has to either copy Firewire’s features or use Firewire directly. Firewire can’t kill USB, and USB is having a hard time killing Firewire, so the only way I see this problem being solved is by allowing USB 3 to do everything Firewire does now while still remaining compatible with USB 2 devices and hosts.

[1]: On some platforms it really does turn into a PCI DMA and allows you to read/write part or all of the system memory, such as for live debugging purposes on another machine. I’d like to see this on USB 3 as well.
