PataMetaData: that which is above that which is after data

<h3>The Case for Net Damage Jinteki at Worlds 2015 (2015-11-02)</h3>
<p>First of what will probably be a slew of Netrunner posts. I think about the game way too much & don't have enough people to blabber to.</p>
<hr>
<p>The Personal Evolution "death by a thousands cuts" deck was the first Netrunner archetype I ever really fell in love with. Starting with core set & the first two big boxes, I stuffed Mushin No Shin, Gila Hands Arcology, House of Knives, Archer, & a bunch of traps in a deck & was immediately happy with the results. The deck fell out of my favor after the release of Order & Chaos; Shapers were using Feedback Filter, I've Had Worse was a <em>great</em> counter packed 3x in every Anarch deck, & the Eater-Keyhole shenanigans of the time were also a tough matchup. But I'm always looking for an opportunity for the resurgence of Jinteki net damage decks in the meta, from new archetypes like Chronos Protocol control to pieces that bolster Personal Evolution such as Lockdown & Back Channels. Just as Minh's Personal Evolution caught the meta off-guard at last year's Worlds & placed second, I think we're primed for another left-field Jinteki deck (<em>not</em> glacier or rush RP!).</p>
<h4 id="general-metamovements">General Metamovements</h4>
<p>Disclaimer: only-partially-informed opinions of a tier two player. I'm hardly the best person to be making these calls, but damn if I don't have some ideas.</p>
<p>The top-tier corp decks at the moment are: glacier (with Caprice Nisei) or fast advance Engineering the Future with Team Sponsorship, NEH fastrobiotics, & NBN kill decks (whether a traditional Butchershop build out of NEH or newer 24/7 kill decks out of Haarpsichord Studios & other new IDs). In response, runner decks typically need to do a few things: pack meat damage protection (typically Plascrete Carapace), prepare to be tagged (possibly including a tag-me mode), & go <em>fast</em>. None of these tactics are effective against slow, grindy net damage decks.</p>
<p>First off, drop the 1 or 2x Scorched Earth in your Personal Evolution lists. It will rarely land now that runners reliably pack Plascrete; better yet, even if you remove the meat damage, you'll still see runners waste a click & 3 credits installing Plascrete. Fast decks, whether aggressive-running Criminals or Wyld-pancake Anarchs, have to abandon their game plan against loads of damage thinning out their deck.</p>
<p>Finally, Faust has become an enormously popular breaker; it's in nearly every Anarch list & creeping into some Shaper & Criminal (mostly Gabe & Leela) lists. But it's <em>terrible</em> against net damage, only racing the corp towards their win condition.</p>
<p>What's <em>bad</em> about the meta right now for Jinteki? Film Critic & recursion. Runners are packing heaps of recursion & the stock of viruses like Parasite & Imp has never been higher. All of these can really take the teeth out of traditional Jinteki lists; Film Critic steals your 2-of Future Perfects in Cambridge Personal Evolution with ease & negates the upside of Fetal AI. Imp can take out Neural EMP or the aforementioned agendas. Knocking breakers from the runner's Grip is far less powerful if they can snatch them back from the Heap in real-time with Clone Chip.</p>
<p>The Order & Chaos counters mentioned in my opening are still around, in particular IHW. But Keyhole decks have fallen out of favor a bit. I'd expect some very good MaxX or Valencia Keyhole decks at Worlds, but I'm still not convinced the archetype is strong enough to worry about.</p>
<h4 id="matchups">Matchups</h4>
<p>I see three dominant runners in the meta: Prepaid Kate, Noise, & circa-2013 Andromeda lists. The Andromeda choice is definitely conjecture; I expect to see far more Andromeda at Worlds than we have seen over the past year, simply as a reaction to how strong NEH fast advance is. People perceive Andysucker to have a strong fast advance matchup, despite the lack of Clot, & will probably turn to her, but not in the Stealth Andy versions that were developed to beat glacier Replicating Perfection.</p>
<h5 id="andromeda">Andromeda</h5>
<p>I've always felt that net damage decks have a good matchup against aggressive, fast-paced criminals. Criminals like to run & are geared to prevent <em>credit</em> taxation with tools like Desperado, Security Testing, Bank Job, & (splashed) Datasucker. But they still don't have great in-faction card draw. Fisk Investment Seminar & Drug Dealer changed this a bit, but ultimately Criminals still fall behind Shaper's draw events & Anarch's many options here. FIS & Drug Dealer are also still unproven; I think top tier players may hesitate before including cards with such obvious downsides & stick to more traditional Andromeda lists. The decks in which these (decent) cards excel are tier three (Laramy Fisk mill/hand bloat, Ian Stirling connections control).</p>
<p>To mention it again, Minh's second place at last year's Worlds largely demonstrates how great the Andromeda matchup is. I recall that one of his only Corp losses in the Swiss was to Spags' Prepaid Kate, while the lack of recursion of most Andromeda lists was simply no match for the amount of damage Personal Evolution threatens. This year, I'd expect every Andromeda list (perhaps every deck list, actually) to have at least one Clone Chip. Zero recursion simply isn't a viable choice anymore with the amount of program trashing available to Corps.</p>
<p>All this said, Account Siphon remains one of the best counters to Jinteki's traps. Controlling the Corp's credits is often the only way to safely check remote servers. Packing a Crisium Grid—also helpful in the Keyhole matchup—might be called for.</p>
<h5 id="kate">Kate</h5>
<p>Kate is the toughest matchup for any Corp right now & Jinteki is no exception. The reason why is a bit different—Kate's typical win condition of multi-access R&D lock isn't viable against traps. Instead, it's the inclusion of Levy AR Lab Access & heaps of recursion (not only 3x Clone Chip, but sometimes Scavenge as well) that make Kate difficult. Still, there are ways in which net damage takes Kate out of her comfort zone & negates her strongest attributes. The very strong economy of Prepaid Kate matters much less when <em>cards</em> are the point of taxation. Clot is a wasted card slot. Because net damage has fallen out of favor, almost every Kate list has cut Deus X & Feedback Filter. Remember that <em>those cards were in there originally to solve a tough matchup</em>! Traps are problematic, & Kate's propensity to play cards for economy (as opposed to persistent resource-based economy like Kati Jones & Security Testing) & her lack of spare influence for I've Had Worse necessitate very careful play on the runner's side.</p>
<h5 id="noise">Noise</h5>
<p>Noise is not necessarily an easy matchup, but it's one Jinteki has the perfect tech for in Shock!, Crick, & Cerebral Static. Noise is the biggest pain against the traditional Personal Evolution shell game, since he can mill rapidly once set up & requires very few breakers to put up huge amounts of pressure. But Industrial Genomics' ability (& the fact that those lists almost always include Shock! & Crick) is an incredible hard counter, while Chronos Protocol has a decent matchup as well. I think the only adjustments that need to be made for the Noise-heavy meta are packing a Cyberdex Virus Suite or 2 & maybe swapping one Snare! for a Shock!. Noise won't have Film Critic & some lists have even cut I've Had Worse in an attempt to cut down on events (to benefit Street Peddler). Noise decks tend to burn through themselves at an incredible rate, with Peddler & Wyldside leading the way, which means the long game isn't necessarily to Noise's advantage.</p>
<h4 id="exit-strategy">Exit Strategy</h4>
<p>Looking at the <a href="http://stimhack.com/tournament-decklists/">Stimhack Tournmanent-winning decklists</a>, there haven't been a lot of Jinteki lists lately. The most noticeable victory of late was Daryl Russell taking down the Australian nationals with an interesting (no House of Knives! no Hedge Fund! 1x Profiteering! 1x Chairman Hiro!) <a href="http://stimhack.com/national-sydney-australia-74-players/">Personal Evolution list</a>. That doesn't necessarily mean Jinteki is poorly positioned, just that they're not the focal point of the meta. I wouldn't be pretty surprised if more than half the Corp decks at Worlds are NBN. With all that fast advance and tagging, some Philotic Entanglement (with 24/7 News Cycle?!?) might make an impact.</p>Anonymoushttp://www.blogger.com/profile/13737038965630253900noreply@blogger.com0tag:blogger.com,1999:blog-5067904571139905755.post-59190586464075258942014-11-24T15:37:00.000-05:002014-11-24T15:37:58.281-05:00A *NIX Use CaseGist of this post with nicer formatting: <a href="https://gist.github.com/phette23/a71248765c0f0cfeddd7">https://gist.github.com/phette23/a71248765c0f0cfeddd7</a><br />
<br />
<hr />
Almost immediately after declaring a hiatus seems like a great time for a blog post.<br />
Inspired by nina de jesus and Ruth Tillman's <a href="https://github.com/satifice/libtech_level-up" rel="noreferrer">libtech level up</a> project, here's something on the value of command-line text processing. Some of these common UNIX tools that have been around since practically the 1980s are great for the sort of data wrangling that many librarians find themselves doing, whether their responsibilities lie with systems, the web, metadata, or other areas. But the command prompt has a learning curve, and if you already use text-editor tools to accomplish some tasks, it might be tough to see why you should invest in learning. Here's one case I've found.<br />
Scenario: our digital repository needs to maintain several vocabularies of faculty who teach in different departments. That information is, of course, within a siloed vendor product that has no viable APIs. I'm only able to export CSVs that look like this:<br />
"Namerer, Name","username"
"Othernamerer, Othername", "anotherusername"<br />
But to import them into our repository I need to clean up the data a little and put it into a slightly different format:<br />
"Namerer, Name","facultyID","username"
"Othernamerer, Othername","facultyID","anotherusername"<br />
This single-line shell script is all I need:<br />
<pre><code>#!/usr/bin/env bash
cat $1 | sort | uniq | sed -e '/"STANDBY",""/d' -e 's|, Staff"|"|' -e 's|, "|"|' -e 's|","|","facultyID","|'
</code></pre>
Let's walk through the script. To make it, I put the above text in a file, named it something like "fac-csv.sh", and made it executable by running <code>chmod +x fac-csv.sh</code>. I won't go into permissions, but neither <code>chmod +x</code> nor the shebang paragraph below is strictly necessary, since one can always type <code>bash fac-csv.sh</code> to run the script.<br />
<code>#!/usr/bin/env bash</code> tells the operating system what program to execute the script with. A lot of scripts list a path directly to the program, e.g. <code>#!/usr/bin/python</code> (for a Python script) or <code>#!/bin/sh</code> (for a shell script). Using <code>#!/usr/bin/env</code> is just a bit more portable across systems; the <code>env</code> command looks in the *env*ironment for a given program, searching several possible locations, so if someone on a different system (one where the shell is in, say, <code>/usr/local/bin/bash</code>) executes the script it'll still work.<br />
<code>cat $1</code> prints out the full text file I want to operate on (a CSV, in this case) so I can start piping it through the processing steps. On the command line, I run this script like <code>fac-csv.sh filename.csv</code> and <code>filename.csv</code> becomes <code>$1</code> (the first positional parameter) inside the script.<br />
The pipes ("<code>|</code>") separating each command chain them together, making the input of one command the output of the last. This is perhaps the most powerful part of UNIX since it means almost arbitrarily complex operations can be composed of smaller ones.<br />
<code>sort</code> takes the CSV, which might be in any order, and sorts the lines alphabetically.<br />
<code>uniq</code> takes duplicate adjacent lines and removes them, thus only *uniq*ue lines are left. This step wouldn't work without the <code>sort</code> prior.<br />
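A tiny demonstration of why the <code>sort</code> has to come first (throwaway letters, not real data):

```shell
#!/usr/bin/env bash
# uniq only collapses *adjacent* duplicate lines:
printf 'b\na\nb\n' | uniq          # prints b, a, b — the two b's never touch
printf 'b\na\nb\n' | sort | uniq   # prints a, b — sorted first, then deduplicated
```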
<code>sed</code> stands for <code>s</code>tream <code>ed</code>itor: it takes the text passed to it and performs a series of edits, each specified with an <code>-e</code> flag. We've already deduplicated the file; <code>sed</code> cleans it up. <code>sed</code> has a lot of edit types but I'm only using two: delete line and substitute.<br />
<code>'/"STANDBY",""/d'</code> is a delete line command, which looks like <code>/pattern/d</code>. So here I'm saying "delete all lines that match the pattern <code>"STANDBY",""</code> since "STANDBY" is an artifact of our data system and not a faculty name we need to be recording.<br />
The substitute commands look like: 1) the letter "s", 2) a delimiter (I've used "|" but other common choices include colons or forward slashes; in general you just want a separator that won't appear in your pattern, since that complicates things), 3) a pattern to match, and 4) the text to substitute for the pattern.<br />
<code>'s|, Staff"|"|'</code> finds <code>, Staff"</code> and deletes the comma-space-Staff part (note the quotation mark is retained).<br />
<code>'s|, "|"|'</code> finds <code>, "</code> and deletes the comma-space, leaving the quotation mark again. This and the step above clean up entries like "Sname, Gname, Staff","sgname, " => "Sname, Gname","sgname"<br />
<code>'s|","|","facultyID","|'</code> adds in a second "facultyID" value in each CSV row, which our repository needs for reasons.<br />
In the end I've: deduplicated the export, deleted useless lines, and cleaned up messy lines. I find occasions to run this script, or a slight modification of it, weekly. Doing the same steps in a text editor would be far more time-consuming and error-prone (since I might forget one, not do them in the right order, etc.).<br />
Maybe this all came out Greek; if so, I apologize. It took me a long time to learn all these steps; in particular, <code>sed</code> has caused me much trouble. But now I'm able to write these quick, one-line scripts that automate what would've been several steps in a text editor.

<h3>Hit the Pause Button (2014-11-22)</h3>
Just an FYI that this blog is going to go dormant for a while as I'm trying to be better about focusing my responsibilities. I'm a little overwhelmed at the moment, as the last post may have indicated, and cutting back my personal blog makes sense given what else I'm doing.<br />
<br />
I'll still be around the interwebs though. <a href="https://twitter.com/phette23">Twitter</a>, <a href="http://acrl.ala.org/techconnect/">Tech Connect</a>, and <a href="https://github.com/phette23">GitHub</a> are good places to find me.

<h3>Better to Burn Out than to Fade Away (2014-09-21)</h3>
Extra-professional obligations of mine:<br />
<br />
<ul>
<li>I edit a column for the <i><a href="http://rusa.metapress.com/home/main.mpx">RUSQ</a> </i>journal, "Accidental Technologist". I'm proud of the columns I've published, but I've only written a couple. I identify topics, authors, read drafts, & provide feedback 4 times a year.</li>
<li>I write (quasi-)monthly blog posts for <a href="http://acrl.ala.org/techconnect/?author_name=phette23">ACRL Tech Connect</a>. Again, I'm proud of my posts. I also provide feedback for <a href="http://acrl.ala.org/techconnect/?page_id=72">my excellent co-authors</a> who mostly tolerate my nagging.</li>
<li>I'm on the LITA Forum Coordinating Committee. It's in Albuquerque this year & it's going to be great! Seriously. I'm excited about <a href="http://www.ala.org/lita/conferences/forum/2014/keynote">the keynotes</a> & Forum has proven to be a great event to meet like-minded library technology folks.</li>
<li>I'm on the Code4Lib 2015 Keynotes Committee. We're still accepting <a href="http://wiki.code4lib.org/2015_Invited_Speakers_Nominations">nominations for keynote speakers</a>!</li>
<li>I want to organize more <a href="https://groups.google.com/forum/#!forum/code4lib-norcal">Code4Lib NorCal</a> meetups, which is the most neglected item on this list. If you're a C4L NorCal person, I promise you'll be seeing messages from me soon.</li>
<li>I'm juggling dozens of open source projects <a href="https://github.com/phette23?tab=repositories">on GitHub</a>, most of which suffer from benign neglect & could use some code & love. I just cannot help myself from jumping into new projects even when I clearly cannot commit enough. <a href="https://github.com/phette23/wdpla-ext">WikipeDPLA</a> is my focal point at the moment but I've created about a half-dozen repos since publishing that & maybe I should just do <i>one</i> project at <i>once</i>.</li>
</ul>
<div>
To reiterate: these are all outside of my librarian position & while I do spend the occasional hour or two on them at work, for the most part I complete tasks outside of my 9-to-5. I can't get tenure; I just can't say "no". & I'm undoubtedly privileged; these are extra-<i>professional</i> commitments that aid my status in the profession, whereas others have extra-professional commitments oriented elsewhere. They can't put theirs in tenure dossiers, as unfair as that is.</div>
<div>
<br /></div>
<div>
But <i>how</i>? How can I <i>continue</i>? I find value in all of these bullet points, so how do I decide to say "no" to any of them? I know others are faced with similar struggles & I'm asking for advice. How do <i>you</i> do it all? There are so many people in libraryland who seem to be in a similar situation; I could name names, but I'd leave someone out. I don't know how they manage so much in so little time.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
Let's all take a breather. No one work for the next week. Let us catch up instead.</div>
<h3>Switching to Fish Shell (2014-08-31)</h3>
I started using <a href="http://fishshell.com/">Fish</a> as my primary shell a few months ago. While I like Bash, the promise of a more modern shell intrigued me. I spend entirely too much time on the command line. My affinity for Bash has less to do with its features as a language or shell than with the UNIX philosophy of many small programs which play nicely together.<br />
<br />
Fish jokingly bills itself as "a command line shell for the 90s". It isn't revolutionizing what a shell does, rather it starts from a strong <a href="http://fishshell.com/docs/current/design.html">design document</a> to provide a better experience. If you're unclear on the difference between a shell, terminal emulator, Bash, & command line interface, try Bryan J. Brown's <a href="http://www.bryanjbrown.com/2013/06/demystifying-shell-pt-i.html">description on his blog</a>.<br />
<br />
<h4 id="whats-good-with-fish">
What's Good with Fish</h4>
Why would I switch to Fish? Immediately after trying it out, a few advantages were apparent. I didn't even have to consult help documentation.<br />
<br />
<strong>Discovery</strong> is where Fish shines. I discovered new, useful programs on Mac OS simply by <kbd>tab</kbd>ing through available completions. Fish's completion is incredibly smart & detailed; it knows files, commands, variables, & flags. Bash does this too, but Fish is far superior & comes with a huge collection of completions for common programs. Its main advantage is that it'll show <em>options</em>, so the completion is exploratory, whereas in other shells completion is just a convenience for people who already know what they're looking for. Fish shows the <em>definition</em> of a particular flag, function, or program—as well as the current value of variables—instead of merely showing that they exist.<br />
<br />
Many of the tools I use have dozens of flags. I love them, but I can't memorize each flag for every one. Take <a href="http://beyondgrep.com/">Ack</a> for example. I usually just add a flag for the programming language I'm searching (e.g. <code>--js</code>) & the string I'm looking for. But the other day I wanted to see the number of matches in each of the large list of files I was searching. Now, I know ack can do this, but I don't know what flag(s) I need. Typically, I'd need to open up ack's man page, search through it, close it, & then run the command. With Fish, I typed a couple dashes, then <kbd>tab</kbd> to see all its completions, spotted <code>--count</code> right away, & ran the command without leaving my current context.<br />
<br />
Another nice advantage of Fish's completion: it learns from previously typed commands. So even if there are no custom-built completions for a particular program, Fish learns how you use it & develops completions over time.<br />
<br />
Fish also has <strong>colors</strong>! Nice ones! They pop more than I'm used to. What's more, the shell provides convenient abstractions for changing colors. The <a href="http://fishshell.com/docs/current/commands.html#set_color"><code>set_color</code></a> command lets you use natural language like "red" rather than the crazy looking <code>echo \033[1;33m</code> (yes, this is actually how you change colors in Bash). <code>set_color</code> is handy, but Fish also has added features like <code>prompt_pwd</code>, which is great for shortening the working directory for inclusion in a prompt.<br />
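As a sketch of how these helpers fit together, a minimal custom prompt might look like this in <code>~/.config/fish/functions/fish_prompt.fish</code> (the colors here are arbitrary choices of mine):

```fish
# fish_prompt is the function name Fish calls to draw the prompt
function fish_prompt
    set_color green
    echo -n (prompt_pwd)  # shortened working directory, e.g. ~/c/dogedc
    set_color normal
    echo -n ' > '
end
```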
<br />
If you don't want to spend hours configuring a custom prompt, Fish comes with a couple dozen nice ones built-in. You can run <a href="http://fishshell.com/docs/current/commands.html#fish_config"><code>fish_config</code></a> to open a configuration interface in a web browser which gives you copy-pastable prompt code. This config feature makes it super quick to get started without a ton of research & looking up replacement tokens. Every shell should have such a feature.<br />
<br />
<strong>Scripting</strong> in Fish is far more straightforward, as the shell's language is minimal & clean. It looks Ruby-esque & favors natural language everywhere over strange, punctuated incantations. Because it's a smaller & more rational language, learning the basics of Fish scripting is quicker than with other shells.<br />
<br />
Fish also has wonderful <strong>error messages</strong>, perhaps the best of any programming language I've dealt with. That may not seem valuable but it helps immensely with learning the shell, especially when transitioning from Bash. Fish will not only point to the erroneous character, but will note common mistakes & try to guess what you missed. For instance, in Bash a subshell is launched with <code>$(…)</code> whereas Fish uses <code>()</code>; the $ in Fish means one & only one thing, that a variable is being used. So when you use a $ in the wrong context, it says so. An example:<br />
<br />
<pre><code>> echo $(whoami)
fish: Did you mean (COMMAND)? In fish, the '$' character is only used for accessing variables. To learn more about command substitution in fish, type 'help expand-command-substitution'.
echo $(whoami)
^</code></pre>
<br />
Fish is half written in its own scripting language, so it's easy to see how some features work & extend them. I noticed that there weren't any completions for Node & NPM, so <a href="https://github.com/fish-shell/fish-shell/pull/1566">I added them myself</a> by aping existing ones. Exposing so much of the shell's core functionality makes it customizable & approachable.<br />
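For the curious, a completion definition is just a series of <code>complete</code> calls in a file under <code>~/.config/fish/completions/</code>. A hypothetical sketch (the <code>mytool</code> command & its flags are invented for illustration):

```fish
# Register flags for a made-up "mytool" command:
#   -c = command, -s = short flag, -l = long flag, -d = description shown in the pager
complete -c mytool -s h -l help -d 'Print usage information'
complete -c mytool -s c -l count -d 'Print a count of matches'
```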
<br />
<h4 id="annoyances">
Annoyances</h4>
In a way, Fish is the perfect shell for someone just getting started at the command line because of its brilliant completions, easy (no code!) configurability, & sane scripting language. Unfortunately, for me, it's not quite perfect because I'm already used to Bash's quirky parts & rely on numerous packages, settings, & scripts that assume a more common (read: Bash) environment.<br />
<br />
Example: <a href="https://github.com/rupa/z">z</a>. Z is a <em>vital</em> utility for me; it allows me to quickly jump between my current location & places I've been previously. Z's API is simple; "z [string]" where "string" somewhere matches the place you want to go. So if I'm destroying system settings in "/Library/Application Support" & then need to go to my Doge Decimal project, I type "z doge" & am transported to "/Users/phette23/code/dogedc". But Z is a shell script; it's written in Bash. Luckily I found <a href="https://github.com/sjl/z-fish">a port for Fish</a>, but for a while I was trying really hacky solutions (including proxying Z through Bash every time I ran it). Other tools, like <a href="https://github.com/creationix/nvm">nvm</a>, pose this same problem.<br />
<br />
To be fair, various incompatibilities aren't Fish's fault. They can only be solved by popularity, so when someone writes a script they think "I need this to work in all the popular shells: Bash, Zsh, & Fish". Sublime Text proved to be the biggest compatibility pain. Sublime uses <code>os.environ['PATH']</code> to find the user's path & this path is used in all kinds of plug-ins. I use several linting plugins, such as <a href="https://github.com/SublimeLinter/SublimeLinter-jshint">SublimeLinter-JSHint</a>, which rely on JSHint being in your path. But Fish separates path locations with a space & not a colon; Sublime consequently misreads the whole PATH string, breaking almost every plugin I've installed.<br />
<br />
I found a way around…and it was to default back to Bash. I ran <code>chsh -s /usr/local/bin/bash</code> to switch my default shell back to Bash, so when Sublime runs <code>os.environ['PATH']</code> it comes back with a predictable, colon-separated path. But then, because I actually want to use Fish, I had to edit all my terminal emulator profiles (I use <a href="http://iterm2.com/">iTerm2</a>) such that, instead of running as login shells that would default to Bash, they execute the <code>/usr/local/bin/fish</code> command. A surmountable problem, but it took me weeks to identify what was wrong & how to fix it.<br />
<br />
In general, Fish users will run into more compatibility problems with all sorts of tools that assume a Bash or strict POSIX environment. As I said, much of this isn't Fish's fault, but it is worth noting that the shell doesn't strive for 100% POSIX compliance. In a way, this is necessary; Fish conflicts with POSIX only where a substantial benefit in usability is at stake. That's great, but it also causes headaches that can't be easily fixed since backwards compatibility is broken.<br />
<br />
While Fish breaks with some POSIX traditions, in other places it doesn't go far enough. It relies heavily on double-underscored internal functions; anywhere there's a naming convention like this, there are scoping problems. It's not clear to me why all shell scripting languages lack true objects; everything ends up in the global scope. While Fish has nice arrays, certainly better than Bash, it still lacks data structures that aid in organization. A hash/dict/associative array type is badly needed. I think this might be a place where Windows <a href="https://en.wikipedia.org/wiki/Windows_PowerShell">PowerShell</a> improves upon POSIX shells, though I haven't used PS enough to truly know.<br />
<br />
There are also things I genuinely like about Bash. I like its <code>||</code> & <code>&&</code> logical operators, which behave slightly differently from the natural language <code>or</code> & <code>and</code> of Fish. I like some of Bash's crazy-looking expansions, like <code>!!</code> (references the last command), which are weird & hard to remember but handy at times.<br />
<br />
My main struggles with Fish revolve around output redirection, which it seems to be more stringent about. I still haven't found a nice way to quietly test if a command exists (which occurs all throughout my dotfiles, since I try not to assume a particular software setup). In Bash, this was simple with <code>command -v $PROGRAM</code>. But <code>command</code> is a shell built-in, not an external program, & so it differs in Fish; Fish doesn't replicate the "v" flag, it only uses "command" as a way to bypass aliases. I've worked around it with a two-line solution: <code>PROGRAM --version >/dev/null; if test $status…</code>. This runs the program, silencing its output, & then checks the exit status (which would be non-zero if the command didn't exist). It works, but it's slower & more verbose.<br />
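Wrapped up as a function, my workaround looks something like this (a sketch only: the <code>have</code> name is mine, & <code>^/dev/null</code> is Fish's stderr redirect):

```fish
function have
    # run the program, silence stdout and stderr, then check the exit status
    $argv[1] --version >/dev/null ^/dev/null
    test $status -eq 0
end

if have git
    # safe to set up git aliases, etc.
end
```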
<br />
There's more than you ever wanted to know about my transition to Fish shell. I'm guessing that switching shells isn't something people consider very often. Those who use the command line rarely probably don't think it's worth the trouble (or don't even know/care that it's possible), while those who rely on the command line necessarily build up lots of dependence on a specific environment. Despite all that, I'd strongly recommend Fish to anyone and I thoroughly enjoy using it every day. The pains are, oddly enough, lesser for inexperienced shell users, while the benefits are greater thanks largely to how sane and helpful Fish is designed to be.

<h3>How Not To Do User Testing (2014-08-07)</h3>
<ul>
<li>Perform tests only after a final product has already been rolled out</li>
<li>Use your tests to reify assumptions already built into the product</li>
<li>Test once and then never again because hey, you’re finished</li>
<li>Refuse to accept the validity of any given test until a statistically representative sample of your user populace has been obtained (it’ll never happen)</li>
<li>Never change your testing tasks and procedures, even the ones that prove to be deeply flawed, poorly worded, uninformative</li>
<li>Ask users for their opinions rather than observing what they actually do. “Do you like this background gradient?” is a particularly apt question.</li>
<li>Conversely, test only tasks you think are important without gauging what users think is important</li>
<li>Collect personal information and video recordings during tests with no plans for how to secure the data or when to delete it</li>
<li>Simply refuse to do user testing</li>
</ul>
<h3>Looping Over Regular Expressions in JavaScript (2014-04-21)</h3>
Much as JavaScript has literal forms for strings & numbers, it also has a literal Regular Expression (henceforth regex) form. So you can wrap characters in single or double quotes to make a string literal, & you can wrap characters in forward slashes ("/") to make a regex literal.<br />
<br />
So regexes are literals in JavaScript…except JavaScript is sort of a broken language with regard to literals. The <code>typeof</code> operator is nearly useless.<br />
<pre><code>typeof /foo/
// returns "object"…damn you, typeof
/foo/ instanceof RegExp
// true! hurrah for instanceof
</code></pre>
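As an aside (my addition, not part of the original post): <code>instanceof</code> can also fail across frames or windows, since each frame has its own <code>RegExp</code> constructor. A sketch of a more robust check using <code>Object.prototype.toString</code>:

```javascript
// The internal type tag survives where typeof fails:
var tag = Object.prototype.toString.call(/foo/);
console.log(tag); // "[object RegExp]"

// Wrapped up as a helper:
function isRegExp(value) {
    return Object.prototype.toString.call(value) === '[object RegExp]';
}
console.log(isRegExp(/foo/));   // true
console.log(isRegExp('/foo/')); // false (a string that merely looks like a regex)
```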
Because regexes have a literal form of sorts, you can do nice things with them like put them in an array & loop over the array:<br />
<pre><code>var tests = [/Foo/i, /BAR/, /baz/i],
str = 'This is a sentence. Foo, says the sentence.';
// check if each regex has a match in sentence
tests.forEach(function(re) {
if (re.test(str)) {
console.log(re + ' is a match!');
}
});
</code></pre>
This is the approach I use in my <a href="https://github.com/phette23/wp-spam-clicker">Wordpress Spam Clicker</a> bookmarklet: I have an array of regexes matching known spammer patterns which I loop over, testing each comment against them.<br />
<br />
BUT what if you want to slightly modify each regex in an array? For instance, what if you want to loop over regexes but test only for matches preceded by a space? You can save some typing & potentially (depending on how big the array of regexes is) a lot of bytes by storing truncated versions of the regexes & then modifying them later. Except it doesn't work:<br />
<pre><code>var tests = [/Foo/i, /BAR/, /baz/i],
str = 'This is a sentence. Foo, says the sentence.';
// check if each regex has a match in sentence
tests.forEach(function(re) {
if ((/\s/ + re).test(str)) {
console.log(re + ' is a match!');
}
});
</code></pre>
I'm trying to take each regex & prepend the special character for a space ("\s"), so <code>/foo/i</code> should become <code>/\sfoo/i</code>. But the addition operator doesn't work here: JavaScript doesn't know how to add two regular expressions, so it casts both to strings & concatenates them (<code>typeof (/\s/ + /foo/i) === 'string'</code>), & the resulting string has no <code>test</code> method. What do?<br />
<br />
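To see concretely what goes wrong (a quick console sketch, my addition rather than the original post's):

```javascript
// The + operator stringifies both regexes & concatenates the results:
var combined = /\s/ + /foo/i;
console.log(typeof combined); // "string"
console.log(combined);        // "/\s//foo/i"

// ...so calling .test on it throws a TypeError:
try {
    combined.test('some string');
} catch (e) {
    console.log(e instanceof TypeError); // true
}
```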
Well, JavaScript also has constructor functions for all its literals: <code>String()</code>, <code>Number()</code>, & <code>RegExp()</code>. Generally, <strong>you do not use these</strong>. I repeat, if you're writing code like <code>var count = new Number(0)</code> you can stop it, stop it right now. One reason is that <code>typeof count</code> will return "object" if <code>count</code> was created with a constructor. But also it's just an unnecessary amount of typing.<br />
<br />
BUT it turns out that compiling regexes from strings can be done using the <code>RegExp</code> constructor. So to achieve my earlier goal I can write:<br />
<pre><code>var tests = ['Foo', 'BAR', 'baz'],
str = 'This is a sentence. Foo, says the sentence.';
// check if each regex has a match in sentence
tests.forEach(function(re) {
if (RegExp('\\s' + re).test(str)) {
console.log(re + ' is a match!');
}
});
</code></pre>
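One more variation (my addition, not something the original bookmarklet does): if you already have genuine regex objects, you can rebuild each one with a prepended <code>\s</code> via its <code>source</code> property, which holds the pattern text without the slashes or flags:

```javascript
var tests = [/Foo/i, /BAR/, /baz/i],
    str = 'This is a sentence. Foo, says the sentence.',
    matches = [];
tests.forEach(function(re) {
    // (/Foo/i).source === 'Foo', so this builds /\sFoo/i
    var spaced = new RegExp('\\s' + re.source, 'i');
    if (spaced.test(str)) {
        matches.push(re.source);
    }
});
console.log(matches); // only "Foo" follows a space in str
```

Note that the flags aren't carried over automatically here; <code>'i'</code> is hardcoded, though newer engines also expose a <code>flags</code> property you could pass through instead.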
Instead of storing regexes which are later cast to strings, I can store strings & then essentially cast them to regexes using the <code>RegExp</code> constructor. I have to escape the backslash to ensure <code>\s</code> makes it into the regex, but it works. The <code>RegExp</code> constructor takes the regex flags as a second argument too, so I could write <code>RegExp('\\s' + re, 'i')</code> to make all my regexes case insensitive. This, too, could be very handy & save a lot of bytes/typing.
<hr />
<h3>Git Tools (2014-03-30)</h3>
On a recent commute I mulled over the various tools I use to make git, the popular distributed version control software, easier to use and more powerful. Here's a round-up of what I use, or find interesting.<br />
<br />
<strong><a href="http://githowto.com/aliases">gitconfig</a></strong> - anything you do with the <code>git config</code> command can also be placed in a .gitconfig runtime configuration file in your home directory. My main use case is aliases that save me a <em>ton</em> of typing. Simple shortcuts like "c = commit -m" are an obvious starting point. But git also has several commands with multiple, handy flags; for instance, my "sweet-looking but concise logs" alias is "l = log --pretty=oneline -n 20 --graph --abbrev-commit". I do not want to memorize and type that monster, not once, not ever. <a href="https://github.com/phette23/my-dotfiles/blob/master/config/.gitconfig">My entire .gitconfig</a> can be found in my dotfiles repo.<br />
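As an illustration (a sketch: the <code>c</code> and <code>l</code> aliases are the ones quoted above, the others are hypothetical examples), an alias section in <code>~/.gitconfig</code> might look like:

```ini
# ~/.gitconfig: "c" & "l" are from the post; the rest are illustrative
[alias]
    c = commit -m
    l = log --pretty=oneline -n 20 --graph --abbrev-commit
    co = checkout
    st = status --short --branch
```

With this in place, <code>git c "commit message"</code> or <code>git l</code> works in any repository.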
<br />
<strong><a href="https://github.com/thoughtbot/gitsh">gitsh</a></strong> - an interactive shell for git. If you're running a bunch of git commands in a row, enter <code>gitsh</code> & run them without typing "git" over & over. This can be very useful as git commands tend to come in waves; "oh I need to commit these final changes, rebase, checkout master, & then merge this feature branch" is a common workflow, for example. Gitsh also displays repository information—the current branch & working directory status—in its prompt.<br />
<br />
<strong><a href="http://hub.github.com/">hub</a></strong> - a command-line tool for interacting with GitHub. I don't use this because it wouldn't gain me a whole lot of efficiency but I bet hub would be invaluable if you're an active GitHub user.<br />
<br />
<strong><a href="https://github.com/creationix/js-git">js-git</a></strong> - an interesting project to implement git in client-side JavaScript with support for various browser storage APIs. Could feasibly bring git into environments like Chromebooks, where one doesn't have command-line access but could still benefit from version control.<br />
<br />
<h4 id="sublime-text-packages">
Sublime Text Packages</h4>
I use Sublime Text as my main editor & these two packages are great in terms of git integration.<br />
<br />
<strong><a href="https://sublime.wbond.net/packages/Git">Git</a></strong> - this package is essential if you're working in Sublime Text. It gives access to all the common git commands—add, commit, diff, log—right in the command palette. You never have to leave your editor to access version control; you can stay in a single context & do everything you need. It's a huge boon to productivity. I use "Quick Commit" (adds and commits the file I'm currently viewing) <em>all the time</em>. I bet roughly a third of my commits are through that single convenience method.<br />
<br />
<strong><a href="https://github.com/jisaacks/GitGutter">GitGutter</a></strong> - highlights lines that have been changed, added, or deleted in the file you're viewing with coloring in the gutter. You can select from a few different styles of coloring. This is a small nicety most of the time but can be of great assistance when returning to a project that has a dirty working directory or stashed changes which you've forgotten.
<hr />
<h3>Start-Up Thinking Is Inappropriate for Libraries (2014-03-20)</h3>
<b>tl;dr</b> — if you believe your institution is a social necessity, start-up thinking is a terrible approach.<br />
<br />
A recent conversation with a friend who has worked in the start-up space brought up Brian Mathews's "<a href="http://chronicle.com/blognetwork/theubiquitouslibrarian/2012/04/04/think-like-a-startup-a-white-paper/">Think Like a Start-Up</a>" white paper and some unresolved issues I have with it, never publicly articulated. See also: <a href="http://satifice.com/2014/03/14/the-marketing-unproblem-of-libraries/">The Marketing Unproblem of Libraries</a>.<br />
<br />
<h4>
#Fail</h4>
<br />
Most start-ups fail. Start-ups are praised for their agility, their ability to solve problems, but not for their longevity. If you believe in the worth of libraries as institutions, I'm guessing you don't want <a href="http://www.bizjournals.com/sanjose/blog/2012/09/most-startups-fail-says-harvard.html?page=all">75% of them to go under</a>. It's unfathomably, eye-rollingly ironic that Mathews starts his white paper with doomsaying about the sustainability of academic libraries <i>and then offers transient organizations as a model for survival</i>. I can't even.<br />
<br />
Trying to flip this fact later in the white paper does little to assuage my concerns. Noting the failure-prone nature of start-ups is not simply some snarky observation; it speaks to irreconcilable differences between how start-ups are run & how libraries should be run. If you want your library to be around next year, next decade, next century, you probably don't want to emphasize risk-taking. <a href="http://longnow.org/">Long-term thinking</a> might be more suitable. You probably don't want to be a <a href="http://www.amazon.com/Save-Everything-Click-Here-Technological/dp/1610391381">technological solutionist</a>. Heck, you probably don't want to rely on the assumption that you only need to serve a population with access to certain technologies. Making an iPhone app is not enough. Making any app is not enough. Being a community-driven organization just might be enough.<br />
<br />
It's also worth mentioning that start-up culture has its own atrocities. It's <a href="http://modelviewculture.com/pieces/dissent-unheard-of">hostile to women</a>.* It's hostile to people of color. Start-ups are generally not the type of organization socially conscious people want to work for, not that there aren't exceptions to this generality. I find it intolerable to valorize start-up culture while its downsides go unmentioned.<br />
<br />
<h4>
On Choosing Appropriate Proxies</h4>
<br />
I envision a rejoinder that libraries should praise & emulate the agility & innovativeness of start-ups, focusing on those attributes rather than their ephemerality. Leaving aside the fact that this straw-person argument is basically "but if you only look at the good things start-ups are good," it hints that start-ups are a poor proxy for what we actually want to talk about. I despise poor proxies. They muddle the debate & obscure the underlying issues. To use my favorite example: when we use age as a proxy for technical savvy, we not only discriminate against older folks but overestimate the abilities of the young. So let's discuss "libraries should be agile & innovative," not "libraries should think like start-ups."<br />
<br />
But that's a lame tag-line, right? And tag-lines are important. "Think Like a Start-Up" is catchy. But if it's so misleading as to be positively counterproductive, it should be ditched.<br />
<br />
<h4>
<i>Exeunt</i></h4>
<br />
Finally, there's perhaps a tension in that start-ups are capitalist institutions <i>par excellence</i> & modern libraries** typically follow a more socialist, resource-sharing approach. But that's too much to go into here & I haven't thought about it enough.<br />
<br />
In general, there are virtually no similarities between what libraries should be(come) & what start-ups are. Mic drop.<br />
<br />
<h4>
Notes</h4>
<br />
* There are numerous examples and articles I could have linked to here, but Ashe Dryden's is particularly apt. If you think this statement is contestable, leave a comment & I can cite additional instances of hostility.<br />
<br />
** Obviously "social libraries" like Benjamin Franklin's Library <i>Company</i> of Philadelphia (unnecessary emphasis mine), where only subscribed members could access the collection, aren't following a very socialist model. These are less common in America today than tax-funded public libraries, for instance.
<hr />
<h3>Open Letter to Middle States Commission on Higher Education (2014-01-31)</h3>
<a href="http://www.msche.org/">Middle States</a>, one of the major higher education accrediting bodies, is seeking feedback on a new set of <a href="http://www.msche.org/documents/characteristicsdraft120913.pdf">Characteristics for Excellence</a> [pdf] in Higher Education. They have <a href="http://www.surveymonkey.com/s/CHXRevisions2013">a survey</a> which is open for comment but only until the end of today (1/31/14) so I encourage everyone to read the draft and submit feedback. For reference, it may help to read the previous <a href="http://www.msche.org/publications/CHX-2011-WEB.pdf">Characteristics of Excellence</a> though they're a lot longer and more convoluted, IMHO.<br />
<br />
<h4>
tl;dr</h4>
<div>
Accrediting standards for libraries should be more rigorous, certainly not entirely absent.</div>
<div>
Also, stop making assumptions about why students attend higher education institutions.<br />
<br />
Below are the survey questions and my responses to them.<br />
<br /></div>
<hr />
<h4>
5. Provide any general comments on the draft of the Characteristics of Excellence (MSCHE accreditation standards):</h4>
<div>
There's a glaring lack of consideration for libraries, information literacy, and library services in the draft. Specific weaknesses will be addressed in the answers below.</div>
<div>
I do want to say that I appreciate the authors' focus on brevity. Whatever my complaints below, this draft is far easier to read, understand, and reason about. This is not only due to its conciseness but also due to the reduced redundancies: no longer must one constantly cross-reference between standards when investigating a single topic, such as assessment of student learning outcomes. It is commendable that this was clearly a focus of the authors.</div>
<br />
<div>
There is a reason that every higher education institution in America has a library in some form or another, but if institutions were held to these draft standards a library would be an unnecessary expense. Hopefully in my following answers it will become clear why higher education institutions have always had and continue to need libraries.<br />
<br /></div>
<h4>
6. Provide specific comments about the ability of the revised accreditation standards to honor the diversity of institutional mission:</h4>
<br />
<div>
This passage from Standard IV makes assumptions about the reasons why students attend institutions: "the successful achievement of students’ educational goals including degree completion, transfer to other institutions, and post-completion placement". While those are only examples, they reduce education entirely to credentialing (degrees) and job placement. This neglects civic duties like preparing students to be informed, critical, and engaged citizens but also many other educational missions (lifelong learning, job promotion, understanding others, bettering one's self, curiosity, entertainment even). Really, it should either read "the successful achievement of students’ educational goals" with no examples that make damaging assumptions about why students attend the institutions that they do or encompass a far broader range of educational missions. Isn't it enough that institutions support students' goals, not what they think students' goals should be?</div>
<b></b><br />
<h4>
7. Provide specific comments regarding the ability of the revised accreditation standards to measure and demonstrate academic rigor and institutional quality:</h4>
<br />
<div>
Standard III #5 which outlines a general education program does not include information literacy, which was covered in the past standards. I would hardly call education which doesn't include information literacy rigorous or quality. While the draft's authors perhaps think that critical analysis and technological competency encompass information literacy, the discipline exceeds those two in places. For instance, critical reasoning does not cover efficiently accessing information, incorporating it into one's knowledge base, employing it to accomplish a specific purpose, or understanding its surrounding ethical/legal/technical issues in the same way that, say, the Association of College and Research Libraries' information literacy standards do. If anything, information literacy is a prerequisite for any critical analysis and more worthy of inclusion. It would be difficult to critically analyze sources when staying within the prescribed arena of assigned readings and one's own filter bubble online, for instance, yet the draft standards do not assure that students will have the means of identifying, seeking, and finding information outside of those areas. Similarly, technological competence doesn't extend to retrieving documents from information systems or ethical inquiry into the innate bias of different technologies and how that bias shapes the availability of information. To mention filter bubbles again, one can be perfectly "competent" at Google searching without realizing that it serves different results to different users depending on a variety of factors such as geographic location, gender, and the web browser being used.</div>
<br />
<div>
Libraries also provide access to scholarly resources. With no institutional obligation to provide a library to its students, the quality of information available to students cannot be validated. Even if instructors were excellent and the student experience sublime, the scholarly materials available would be lacking. While an increasing amount of academic material is available freely on the web, the vast majority of scholarly literature is still locked in subscription databases. Furthermore, academic information on the web is scattered, difficult to find, and hidden amongst sources of more dubious quality. Couple that with the fact that you have not required that students be information literate and there is simply no hope that they will learn to read and use high-quality information, an effect that in turn reduces institutional quality. Finally, librarians are trained and strive to provide information from multiple paradigms. Without them, academia becomes an exercise in confirmation bias; there's no assurance that students or even faculty will seek out, or have available to them, alternate points of views.</div>
<br />
<div>
Probably for the reasons outlined above, many other major higher education associations recognize information literacy as a key competency, e.g.:</div>
<div>
AACU "LEAP" Essential Learning Outcomes: <a href="http://www.aacu.org/leap/documents/EssentialOutcomes_Chart.pdf">http://www.aacu.org/leap/documents/EssentialOutcomes_Chart.pdf</a></div>
<div>
New England Association of Schools and Colleges Commission on Institutions of Higher Education, Standard 4 "The Academic Program"</div>
<div>
<a href="http://cihe.neasc.org/standard-policies/standards-accreditation/standards-effective-july-1-2011#standard_four">http://cihe.neasc.org/standard-policies/standards-accreditation/standards-effective-july-1-2011#standard_four</a><br />
</div>
<h4>
8. Please provide specific comments related to the ability of the revised accreditation standards to measure the quality of the student experience - both within and outside of the classroom:</h4>
<div>
Standard IV #6 mentions review of student support services "designed, delivered, or assessed by third-party providers" but does not apply the same to in-house services. Apparently only outsourced services need to be of sufficient quality? Or is the implication that support services should not be developed locally?</div>
<br />
<div>
Further, libraries are essential to student experience and go completely unmentioned in the draft. Libraries provide space for study, whether in collaborative groups or in quiet isolation, as well as territories for intellectual exploration. These territories are increasingly digital; do not picture simply a student roaming tall shelves filled with volumes, but also browsing interactive digital archives that serve both to sustain cultural memory and stimulate curiosity. What's more, many libraries are engaged in creative endeavors that involve facilitating student production of various artifacts, whether those be videos, podcasts, publications both print and online, or artifacts produced by three-dimensional printers.</div>
<br />
<div>
It is not merely that libraries go unmentioned which is disconcerting, but that Middle States has never sufficiently assessed libraries. I am currently in the middle of a self-study and am personally disappointed at how meek the library requirements are. The standards seem to ask "Do you have a library? If so, check yes." They do not ask that library services be responsive to student needs and assessed for their efficacy. Holding libraries to the very low standard of mere existence damages both the profession of librarianship and higher education at large. If anything, rather than excising all mention of libraries from the Characteristics, your organization should seek more substantive demonstrations of value from libraries.<br />
<br /></div>
<h4>
9. Please provide specific comments related to the ability of the revised accreditation standards to maintain a focus on continuous improvement while demonstrating meaningful institutional outcomes:</h4>
(I had nothing to say here, plus was rambling too much elsewhere, so left it blank)<b></b><br />
<b><br /></b>
<h4>
10. Please provide specific comments related to the ability of the revised accreditation standards to encourage and support innovation:</h4>
<br />
<div>
While I see nothing in the standards that specifically encourages innovation, I see much that limits it. Specifically, commitments to specific planning, documentation, and reporting structures limit the agility of institutions, particularly small ones. Innovation is given lip service in Standard III #2 subsection d, which is shared with professional development. If it's so important that you ask for feedback in this survey, perhaps it deserves more prominent focus in the Characteristics.</div>
<br />
One improvement might be recognizing the role that failure plays in innovative organizations. Language is powerful and an accrediting body actually acknowledging that failure can be a learning and growing experience would be of immense benefit to higher education. Accreditation has traditionally been a punitive exercise; do something wrong and you are warned or lose accreditation. What if you reframed it as a process that rewards experimentation? Experiments often do not work out, but if results are shared properly then they prevent others from making the same mistakes and increase the likelihood that future efforts will succeed. Encourage effort and sharing as opposed to punishing failure.
<hr />
<h3>Philosophizing: Minority, Numbers, Gender, Librarians (2014-01-10)</h3>
<h4>
Update (1/27/14)</h4>
I'm going to leave the post below intact, but after the #libtechgender panel I want to confess a glaring problem with this post: it's pretty clearly essentializing gender (e.g. the penultimate paragraph). If I took away one thing from the panel, it was the importance of understanding <a href="http://plato.stanford.edu/entries/discrimination/#Int">intersectionality</a> and that many people have multiple attributes which are oppressed (gender, race, ableness, sexual orientation, class, religion...there are more). Focusing on one difference downplays this intersectionality. For some of the panel's content, <a href="http://chrisbourg.wordpress.com/2014/01/25/gender-issues-panel/">Chris Bourg</a> and <a href="http://cecily.info/2014/01/25/moving-the-libtechgender-conversation-forward/">Cecily Walker</a> both wrote blog posts. Those posts were written before the panel so they don't necessarily cover all that we talked about but they're great reads on these issues.<br />
<hr />
<br />
Before I participate in a panel on #libtechgender at ALA MidWinter, I wanted to articulate some thoughts that have been on my mind.<br />
<br />
The word "minority" is unfortunate because of its numerical connotations. When we speak of a "minority" group of people, the group's proportion of the population is not what's at issue. My thinking follows Deleuze & Guattari:<br />
<blockquote>
The notion of <i>minority</i> is very complex, with musical, literary, linguistic, as well as juridical and political, references. The opposition between minority and majority is not simply quantitative. Majority implies a constant, of expression or content, serving as a standard measure by which to evaluate it. Let us suppose that the constant or standard is the average adult-white-heterosexual-European-male speaking a standard language (Joyce's or Ezra Pound's Ulysses). It is obvious that "man" holds the majority, even if he is less numerous than mosquitoes, children, women, blacks, peasants, homosexuals, etc. That is because he appears twice, once in the constant and again in the variable from which the constant is extracted. Majority assumes a state of power and domination, not the other way around. It assumes the standard measure, not the other way around. — <cite>A Thousand Plateaus</cite>, pp.116-7</blockquote>
This is particularly relevant in America. Here, whites are about to (have already? I'm being a bad librarian and not looking this up) become a <i>numerical minority</i>. And doubtless some pundits will use this to argue that white people should benefit from affirmative action and other programs, opportunistically preying upon a misunderstanding of the word minority. What makes white people a majority is their status as a standard, not their quantity. D&G's example is perfect: white heterosexual men are not a numerical majority, but they are a standard. So much assumes their viewpoint.<br />
<br />
Other than avoiding silly conclusions, recognizing the non-numerical status of the majority/minority group helps in one other way: it hints that solutions will not be arithmetical. Numbers are great proxies but they are not the thing itself. As a hypothetical, consider if we attain female representation at library technology conferences in equal proportion to the number of female library technologists. Is our work done? Gender equality! The numbers are equal thus equality! No, again, equality is not a numeric term here. Not until women not only participate in similar proportion but also feel as comfortable, are respected as much, etc. is there anything that could be called equality. So increasing female participation numbers is great, but only as a means to this non-numeric equality. And also, this non-numeric equality doesn't mean "we're all the same," which I feel is used as a pedantic counterargument to liberatory politics. "Equality" means no one group holds the majority position. No group plays standard, has their viewpoint assumed.<br />
<br />
There is a lot more to talk about but I'm only going to outline it because digression. Just as equality does not mean we're all leveled into one homogenous mass of humanity, it does not mean power struggles suddenly disappear (on the contrary, power would be more fluid, would circulate far more). Also, the notion of "minority" is incredibly strong in D&G, as the quote above implies. It's an artistic, social, <i>ontological</i> notion even. Because majority is more standard than highest proportion, those considered within the majority group can "become-minority" (specific examples abound in D&G, such as "becoming-woman", "becoming-animal"). This is where the majority members can realize their own liberation. They too are not held to a standard, can embrace alternate ways of being. As an example, patriarchy hurts men, too. They must be manly, be heterosexual, not cry, not show emotion, not get beat up, etc. Ultimately everyone runs into a limitation of the standard, a point where they do not meet its demands.<br />
<br />
<hr />
<h3>Top 10 Albums of 2013 (2013-12-29)</h3>
Just in time for the New Year, here's another list for you and another digression from my usual topics.<br />
<br />
<hr />
<br />
1. Flaming Lips / <i>The Terror</i> — the last two Flaming Lips albums have been excellent. They're dark, ragged affairs, not at all the polished weird pop of <i>Yoshimi</i>.<br />
<br />
2. Jon Hopkins / <i>Immunity</i> — one of the best electronica albums in years. Crunchy, huge, pounding. Not exactly beat- or melody-driven, just amazing sounds.<br />
<br />
3. Altar of Plagues / <i>Teethed Glory and Injury</i> — a black metal band that's coming full circle back around to riffs. There isn't as much tremolo picking here as tense atmosphere & well-timed brutality.<br />
<br />
4. Kanye West / <i>Yeezus</i> — I liked <i>My Beautiful Dark Twisted Fantasy</i> a lot, but <i>Yeezus</i> does everything that album did—staggering egotism—better, with a more cohesive sound. A friend, not having heard the album, described it as "industrial rap," a weird label for Kanye since he's always been a pop artist at heart. But the beats are as much NIN as Just Blaze. The fact that there are only 10 songs & fewer grandiose digressions (e.g. "All of the Lights") makes it more focused. <i>MBDTF</i> was interesting for its sprawling, diverse nature, but Kanye would do well to limit the sheer number of ideas & contributors he packs into his albums. He has plenty of creativity on his own & <i>Yeezus</i> shines due to that.<br />
<br />
5. James Blake / <i>Overgrown</i> — Blake has a tremendous voice which quivers with insecurity, love, & despair. It's a powerful instrument sorely lacking in the dubstep scene which makes his work stand out. Blake's last album was good but inconsistent: I listened to the superlative first three tracks ("Unluck", "The Wilhelm Scream", & "I Never Learnt to Share") over & over, skipping the rest of the album. <i>Overgrown</i> lacks obvious standouts & is better for it. It's a rich experience where songs fluidly intermingle, no abrupt drops in quality.<br />
<br />
6. Earl Sweatshirt / <i>Doris</i> — Odd Future's output has been erratic. Even the good albums (mostly Tyler, the Creator's, but Frank Ocean's <i>Channel Orange</i> too) tend to have half-baked songs that shouldn't have made the cut. <i>Doris</i> is the first great album by the crew's most talented member. Sweatshirt's flows are dense & rhyme-laden. He isn't a fast rapper or witty, he's obsessed with the <i>sound</i> of language & it shows. That the songs tend to be moody productions with plaintive lyrics (e.g. "Chum" & its "get up off the pavement, brush the dirt up off my psyche" refrain) is a bonus.<br />
<br />
7. Windhand / <i>Soma</i> — very low, overwhelmingly distorted metal. A nice job with the "voice lost in the machine" dynamic which has always been a favorite of mine.<br />
<br />
8. Daniel Avery / <i>Drone Logic</i> — this album hit a sweet spot for me. I've missed acid techno so much (Aphex Twin, where are you? Come back to us.) & <i>Drone Logic</i> does it straightforward, no frills, well.<br />
<br />
9. Sadgiqacea / <i>False Prism</i> — brutal metal with a healthy dose of dissonance. Combines slow droning with rapid black metal, often in the same song. Sadgiqacea change things up just enough to make the music interesting without reducing its molten impact. The shortest song, "False Prism", demonstrates these strengths well, starting with a few echoing, quiet notes from a clean guitar before diving into frantic picking, and then towards the end becoming a slow, chugging affair.<br />
<br />
10. The Range / <i>Nonfiction</i> — is this what trip-hop is nowadays? I like it. The songs with looped vocal samples toward the beginning are the best, like "Metal Swing".<br />
<br />
<h4>
Honorable Mentions</h4>
<br />
Burial / <i>Rival Dealer</i> — I've cheated in the past by putting Burial EPs on what's supposed to be a list of <i>albums</i> (what's an album, anyways?) so I'll attempt to make up for it by leaving <i>Rival Dealer</i> off. It's great, though. Burial's recent swing into anthemic, ≈10 minute songs on his last three EPs—<i>Rival Dealer</i>, <i>Truant / Rough Sleeper</i>, <i>Kindred</i>—is wonderful. He's always been a master of atmosphere & the two-minute interludes of rustling wind work better as slow-downs in otherwise intensely emotional music as opposed to separate tracks.<br />
<br />
The Haxan Cloak / <i>Excavation</i> — creepy, dark, & consistent in its execution. It's a good album but a bit too slow-moving, the atmosphere too thin in places.<br />
<br />
James Holden / <i>The Inheritors</i> — pretty weird electronic music, the sort that sounds like circuits being twisted & soldered together rather than keys on a synth being pushed. Organic electronic.<br />
<br />
Moderat / <i>II</i> — solid mix of R&B & electronica, catchy without being too predictable.<br />
<br />Anonymoushttp://www.blogger.com/profile/13737038965630253900noreply@blogger.com0tag:blogger.com,1999:blog-5067904571139905755.post-48045160350410345752013-12-04T17:37:00.000-05:002013-12-04T17:38:54.802-05:00My All-Time NBA Starting FivePossibly-surprising fact: I'm really into the NBA. This post is a serious detour from my usual subjects.
<br />
<hr />
<br />
<h3>
PG - <a href="http://www.basketball-reference.com/players/j/johnsma02.html">Magic Johnson</a></h3>
Considered: <a href="http://www.basketball-reference.com/players/s/stockjo01.html">John Stockton</a>, <a href="http://www.basketball-reference.com/players/r/roberos01.html">Oscar Robertson</a><br />
<br />
Magic Johnson is one of the most anomalous players the NBA has ever seen. More so than anyone else, he could (and did) play every position. He could rebound like a center and pass like a point guard. Magic's 52% career shooting average is the highest amongst point guards and his rebounding percentage (11.1%) probably ranks up there as well. His ability to play multiple positions defensively while leading fast breaks is a devastating weapon. To put it in modern terms, Jason Kidd has been an elite (for many years, arguably the best) point guard in the NBA; Magic Johnson does everything Kidd does but significantly better (Kidd's three-point shooting ability towards the end of his career aside).<br />
<br />
It's difficult to leave Stockton off this list. He was never an elite scorer, but excelled everywhere a point guard should: good three-point shooter, terrific defender, better at generating steals than Magic, assisted on over half of the possessions where he touched the ball (an unbelievable statistic). In some ways, when building a team, it's better to have a prototypical point guard rather than someone unique like Magic, whose strength comes from his size and rebounding, not traditional point guard qualities. Stockton's ability to space the floor and set up others would doubtless serve a team of superstars well, given that there would be no lack of scoring talent on the floor. In the end though, the tremendous mismatches that Johnson causes (there isn't a point guard in the world who can guard him on the low post) as well as his additional rebounding win out.<br />
<br />
The Big O is also difficult to leave off, but not necessarily because he's a comparable talent. Unfortunately, without a three-point line and many statistical categories (steals, turnovers), it's tough to tell just how good Oscar Robertson was relative to Magic. He had the same all-around type of game, with tremendous rebound totals for a point guard, and shot a very good 48.5% from the field while taking 7.5 free throws per 36 minutes (see how close those figures are to Jordan's below). But in the stats we do have, he falls behind Magic in many vital categories: rebound, assist, and effective field goal percentages; per-minute assist and rebound totals; win shares. Robertson's star was built off a few extraordinary early seasons and playing 40+ minutes per night; his career was great but not best-in-class.<br />
<b><br /></b><br />
<h3>
SG - <a href="http://www.basketball-reference.com/players/j/jordami01.html">Michael Jordan</a></h3>
Considered: No one else comes close.<br />
<br />
Shooting guard is the only easy choice on this entire list. Jordan is head and shoulders above any other shooting guard to have played the game. Jordan was known for being a dynamic scorer, someone who could create his own shot with ease and take defenders off the dribble. But he had a stunningly complete game: he rebounded better than your average guard, was every bit as amazing a defender as a scorer, generated steals, passed fairly well, and turned the ball over surprisingly rarely given the amount of time he spent handling it (9.3 career turnover percentage). Jordan shot a high percentage for a shooting guard at 49.7% and generated 7.7 free throws per 36 minutes with his aggressive drives. His only weakness is his poor three-point shooting: towards the end of his Chicago days he had a couple good years, but he was a lifetime 32.7% shooter from beyond the arc, putting him well below most modern shooting guards.<br />
<br />
Some would argue that <a href="http://www.basketball-reference.com/players/b/bryanko01.html">Kobe Bryant</a> is, if not better than Jordan, at least in the same league. There is no statistical validity to this argument. Bryant is worse in every category I mention above—rebounding, steals, turnovers, shooting efficiency, defensive win shares, free throws attempted per minute. It's barely true that Bryant is a better three-point shooter, but he's still below what you want from a SG at 33.6%. Objectively, he does not belong in the conversation. <a href="http://www.basketball-reference.com/players/e/ervinju01.html">Julius Erving</a> would be an interesting pick as he is actually better at some things—he rebounded and blocked shots at a SF level—but in the end Jordan is just clearly superior in too many categories to consider anyone else.<br />
<br />
<h3>
SF - <a href="http://www.basketball-reference.com/players/j/jamesle01.html">LeBron James</a></h3>
Considered: <a href="http://www.basketball-reference.com/players/b/birdla01.html">Larry Bird</a><br />
<br />
Small forward has surprisingly few candidates for the All-Time Team. Up until about the past decade, when the NBA started to showcase wingmen with tremendous athletic gifts like LeBron James and <a href="http://www.basketball-reference.com/players/d/duranke01.html">Kevin Durant</a>, the position was not a premier one. Shooting guards and power forwards accounted for the majority of scoring; SFs were often role players who spaced the floor with shooting and provided defensive versatility, as typified by <a href="http://www.basketball-reference.com/players/b/bowenbr01.html">Bruce Bowen</a>. The few historical exceptions were either high-volume, low-efficiency scorers (<a href="http://www.basketball-reference.com/players/b/bayloel01.html">Baylor</a>) or rebounding beasts lacking an elite all-around game (<a href="http://www.basketball-reference.com/players/c/cunnibi01.html">Cunningham</a>, <a href="http://www.basketball-reference.com/players/d/debusda01.html">DeBusschere</a>, <a href="http://www.basketball-reference.com/players/r/rodmade01.html">Rodman</a>).<br />
<br />
Bird was a significantly better rebounder and distance shooter than James. James is a better passer who also turns the ball over a smaller percentage of the time. James, who seems to continuously improve in terms of scoring efficiency, has pushed both his true shooting and effective shooting percentages higher. He also makes up for his lesser (though dramatically improved) three-point shooting with an uncanny ability to finish drives to the rim, which results in him attempting three more free throws per 36 minutes. In my book, free throws are the single most valuable source of points: they come with fouls which get the opponent into trouble and allow your defense to set itself on the next possession. In terms of defense, LeBron is clearly superior. Bird was a crafty and underrated defender but lacked lateral quickness; James not only blocks more shots than Bird (their steals percentages are close) but has more versatility with his quickness. In the end, that's what puts LeBron ahead for me: James has only a slight offensive edge but is a unique defensive talent. His win shares are significantly higher than Bird's, which validates my conclusion.<br />
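<p>Since so much of this post leans on true shooting and effective field goal percentages, it's worth spelling them out. These are the conventional definitions used by stat sites like Basketball-Reference, not formulas given in the post itself, and the box-score line below is invented for illustration:</p>

```python
def efg_pct(fgm, fg3m, fga):
    # Effective field goal %: a made three counts as 1.5 field goals,
    # so distance shooting inflates this relative to raw FG%.
    return 100.0 * (fgm + 0.5 * fg3m) / fga

def ts_pct(pts, fga, fta):
    # True shooting %: folds free throws into shooting efficiency using
    # the standard 0.44 estimate of FT attempts that end a possession.
    return 100.0 * pts / (2 * (fga + 0.44 * fta))

# An invented box-score line: 10 makes (2 of them threes) on 20 attempts,
# 25 points, 8 free throw attempts.
print(round(efg_pct(10, 2, 20), 1))  # 55.0
print(round(ts_pct(25, 20, 8), 1))   # 53.1
```

<p>Because true shooting credits trips to the line, a player who draws a few extra free throws per 36 minutes gets a real efficiency boost that raw field goal percentage hides.</p>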
<b><br /></b><br />
<h3>
PF - <a href="http://www.basketball-reference.com/players/d/duncati01.html">Tim Duncan</a></h3>
Considered: <a href="http://www.basketball-reference.com/players/m/malonka01.html">Karl Malone</a><br />
<br />
This was a tough choice between two excellent players. Looking at their careers, a dichotomy becomes clear: Malone was a superior offensive player while Duncan is a better defender. Malone would benefit this team with his uncanny ability to run the floor for a man of his size and his scoring efficiency (57.7% career true shooting as opposed to Duncan's 55.1%). Malone actually wasn't a traditional big man offensively, however: he scored off of pick-and-rolls (playing with Stockton helped a lot here), dribble drives, fast breaks, and the occasional jump shot. He used his quickness as much as his strength to overcome defenders.<br />
<br />
Duncan, on the other hand, is more of a traditional back-to-the-basket big man. He scores mostly via post-ups but also pick-and-rolls. While Malone was also an all-NBA defender in his time, Duncan's defense stands head and shoulders above: Duncan's block percentage (4.6 career as opposed to 1.5), defensive rebounding (26.5% career to 23.5%), and defensive rating (an incredible 95 to Malone's respectable 101) are all ample evidence of this. He manages to defend excellently while committing half a foul less than Malone per 36 minutes as well.<br />
<br />
In the end, Duncan's defense outweighs the offensive benefit that Malone would bring. Also, while it'd be nice to have Malone's ability to run the floor coupled with the other fast-break superstars on this roster, it's actually even more appealing to have a post-up player in the mix. Duncan's presence down low could give the perimeter players a bit more space to take jump shots and drive into the paint. Since my center pick below isn't going to provide that low-post scoring, it's good to get it out of the power forward. Duncan's not just a good defender for a PF either, he's arguably one of the best defenders the game's ever seen at any position, but he still can't compete with my pick for center.<br />
<br />
<h3>
C - <a href="http://www.basketball-reference.com/players/r/russebi01.html">Bill Russell</a></h3>
Considered: <a href="http://www.basketball-reference.com/players/c/chambwi01.html">Wilt Chamberlain</a>, <a href="http://www.basketball-reference.com/players/o/onealsh01.html">Shaquille O'Neal</a>, <a href="http://www.basketball-reference.com/players/o/olajuha01.html">Hakeem Olajuwon</a>, <a href="http://www.basketball-reference.com/players/a/abdulka01.html">Kareem Abdul-Jabbar</a><br />
<br />
Center is, by far, the hardest position at which to make a decision. The NBA has showcased many great centers, all of whom affected the game tremendously at both ends of the floor. Their high shooting percentages, team-leading rebounding, and massive defensive impact have historically made center basketball's premier position.<br />
<br />
Bill Russell is also probably the most noticeably flawed player on this list: he's not a good shooter by any means. Russell tended to shoot mid-range jump shots, the NBA's worst shot, resulting in a miserable effective field goal percentage of 44 and a true shooting percentage of 47.1. However, he was an excellent rebounder, above-average passer, and <i>superlative defender</i>. While we don't have block and steal statistics for his era, he led the league in Defensive Win Shares an unmatched ten years in a row and in eleven of his thirteen years in the NBA. In fact, his <a href="http://www.basketball-reference.com/leaders/dws_career.html">Defensive Win Shares</a> are probably the most aberrational statistic in the entire NBA; they're almost 40% greater than the next best player's (Duncan's). He is the best defensive player ever. He won more championships than anyone else.<br />
<br />
While I listed many other centers who rightfully belong in the conversation, the only one who gives me serious doubts is Wilt Chamberlain. Chamberlain and Russell were contemporaries and there is ample evidence that Chamberlain was a superior player. Chamberlain had more win shares, shot an incredible percentage, and was a good (if not at Russell's level) defender. Yes, Russell won many championships, but on a set of deep Celtics teams that featured other superstars. Russell won 5 MVPs to Chamberlain's 4. In the end though, I have to pick Russell. In filling out a roster, you want someone who makes sense given your other players. Russell is a defensive anchor who doesn't need to put up shots offensively. He fits in any lineup. If I could have one player to build a franchise around, it would be Russell and I wouldn't regret it for an instant.
<b><br /></b>
<br />
<h3>
Caveats</h3>
Traditional NBA and modern (1980 and on) NBA stats are not comparable because of differences in pace and the three-point line. Older games had more shots, more misses, and stratospheric rebounding totals. Newer games benefit from the three-point shot and are tracked with new statistics, such as blocks, steals, and plus/minus figures. The three-pointer is an ongoing problem: the line keeps getting moved back further, so contemporary players are shooting more difficult threes than Larry Bird did in the 1980s. I tried to correct for the NBA's changing rules but the effort is ultimately futile; we cannot know if the classic greats of the game could compete with even mediocre modern players. My unsupported guess is that athletes have evolved; LeBron James would crush Oscar Robertson if the two competed in their primes.<br />
<br />
I do want to take a moment to point out that David Stern and the NBA office clearly have an agenda behind their recent rule changes. They keep pushing the three point line back, a couple of inches every couple years now, it feels like. They create new rules (no hand checks, the "no charge" semi-circle, the unspoken tolerance of travelling) which benefit drives. <i>They are trying to generate dunks with rule changes</i>. The contemporary NBA discourages long jump shots, zone defense, and perimeter play in favor of drives, isolation matchups, and flashy dribbling. Whether that is right in any sense is clearly irrelevant; it's a marketing choice and dunks are exciting. But I do wish somebody (for all the innumerable talking heads, I have yet to hear anyone mention what I consider to be an evident trend) would talk about it.Anonymoushttp://www.blogger.com/profile/13737038965630253900noreply@blogger.com0tag:blogger.com,1999:blog-5067904571139905755.post-18961112881169437172013-11-09T09:00:00.000-05:002013-11-09T09:00:08.837-05:00thanks to #libtechwomenYesterday, Travis Good of Make Magazine gave what I thought was a pretty good keynote. He talked about technology, progress, makers, community—it hit all the right spots.<br />
But one thing that crossed my radar, thanks to the wonder that is librarians on Twitter, is that much of his language was gendered:
<br />
<blockquote class="twitter-tweet">
<a href="https://twitter.com/search?q=%23litaforum&src=hash">#litaforum</a> women and children were makers too, back in the day!<br />
— emily Mitchell (@mitchee3) <a href="https://twitter.com/mitchee3/statuses/398883789281046529">November 8, 2013</a></blockquote>
<script async="" charset="utf-8" src="//platform.twitter.com/widgets.js"></script>
<br />
<blockquote class="twitter-tweet">
Back in the day there were craftswomen too... <a href="https://twitter.com/search?q=%23litaforum&src=hash">#litaforum</a><br />
— Karen Merguerian (@GKMerguerian) <a href="https://twitter.com/GKMerguerian/statuses/398883827831291904">November 8, 2013</a></blockquote>
<blockquote class="twitter-tweet">
Nerds can be women too.. <a href="https://twitter.com/search?q=%23justsaying&src=hash">#justsaying</a> <a href="https://twitter.com/search?q=%23litaforum&src=hash">#litaforum</a> :)<br />
— Cindi Blyberg (@ctblyberg) <a href="https://twitter.com/ctblyberg/statuses/398882723265134592">November 8, 2013</a></blockquote>
<br />
<blockquote class="twitter-tweet">
Nerds can also be smart ladies you go to with hard problems :) <a href="https://twitter.com/search?q=%23litaforum&src=hash">#litaforum</a><br />
— LITAForum (@LITAForum) <a href="https://twitter.com/LITAForum/statuses/398882272528834561">November 8, 2013</a></blockquote>
I consider myself a pretty sensitive person with respect to these issues. In language as well as action, I try to let things be neutral and fair, evicting unnecessary and damaging assumptions. But I didn't notice the gendered language <i>at all</i> until I saw the tweets calling it out. And more than anything I want to say: I appreciate this. I need the reminder. We all do. It can't stand and it's not going to change unless people are persistent, unless they call out even the most seemingly-innocuous assumptions. Because they're not innocuous. Because we need to say what we mean, not something that's close but shrouded in the biases of our past.<br />
So, thank you, <a href="https://twitter.com/search?q=%23libtechwomen">#libtechwomen</a>, and everyone else who fights this fight. We appreciate it and learn from you.<br />
Anonymoushttp://www.blogger.com/profile/13737038965630253900noreply@blogger.com0tag:blogger.com,1999:blog-5067904571139905755.post-4268688212606508922013-07-23T16:44:00.000-04:002013-07-23T16:44:43.595-04:00Adding LibGuides to Drupal's Search Results<p>This will be another super specific post about how to do something useful for libraries in Drupal. The <abbr title="too long; didn't read">tl;dr</abbr> is that you can use LibGuides <abbr title="eXtensible Markup Language">XML</abbr> Export, the Feeds module, and the Feeds XPath Parser module to make LibGuides show up in your Drupal site search results. So when users search for "english composition" and you don't have any study guides on your Drupal site, something relevant from LibGuides might show up.</p>
<p>I was inspired to do this by the <a href="http://books.google.com/books?id=IMsqx1cb5ecC&pg=PA75&lpg=PA75">Drupal in Libraries</a> book, though I haven't read it (I saw it mentioned in <em>American Libraries</em>). I didn't see specific details in the book's preview, and Michigan is putting the <abbr title="eXtensible Markup Language">XML</abbr> into their Solr search index which is too sophisticated for my small college, so I thought a brief write-up might benefit other libraries who have LibGuides but don't use Solr. Libraries using other CMSs might still benefit from the general outline, though the specific details won't be useful. I'd be <em>shocked</em> if Wordpress libraries couldn't do the same, using WP All Import or other plugins.</p>
<p>These directions are specific to Drupal 7; I bet the same can be achieved in 6 but I can't vouch for any of the settings or code being the same.</p>
<h4 id="set-up-libguides-modules">Set-up: LibGuides & Modules</h4>
<p>In order to do this, you have to do a couple steps first to prepare both LibGuides and Drupal.</p>
<ul>
<li>Purchase the <a href="http://guidefaq.com/a.php?qid=1927">Images and Backups Module</a> from Springshare. In my experience, the pricing is very reasonable, and the "images" part of it means you can upload images to LibGuides which makes adding them to guides much, much easier for authors.</li>
<li>Install <a href="https://drupal.org/project/feeds">the Feeds module</a>, a popular and well-maintained module for mass importing nodes from structured data (<abbr title="Really Simple Syndication">RSS</abbr>/Atom feeds, <abbr title="Comma Separated Values">CSV</abbr> files, <abbr title="Outline Processor Markup Language">OPML</abbr> files) into Drupal</li>
<li>Install the <a href="https://drupal.org/project/feeds_xpathparser">Feeds XPath Parser</a> module, which adds an extra parser to your Feeds installation, allowing you to import nodes from arbitrary <abbr title="eXtensible Markup Language">XML</abbr> documents</li>
</ul>
<p>Once you've done these three steps, download the <abbr title="eXtensible Markup Language">XML</abbr> export from LibGuides (Springshare will email you when it's ready) and enable both modules in Drupal.</p>
<h4 id="process-the-abbr-titleextensible-markup-languagexmlabbr">Process the <abbr title="eXtensible Markup Language">XML</abbr></h4>
<p>I don't work with <abbr title="eXtensible Markup Language">XML</abbr> much (shame, librarian, shame!) but this is a step where you could edit the LibGuides export to make it more useful as an imported node. In my pre-processing, I only wanted to accomplish one thing: when I import the nodes, I don't want any unpublished or private guides to be published in Drupal. We have a few under-construction or private guides that shouldn't show up in search results.</p>
<p>To do so, there's just one Drupal quirk you have to know: later on, in configuring the way your data maps to Drupal nodes, you'll be able to map the contents of an <abbr title="eXtensible Markup Language">XML</abbr> element to a Drupal node's "publication status" field. 1 means published and 0 means unpublished.</p>
<p>Luckily, the LibGuides <abbr title="eXtensible Markup Language">XML</abbr> has a <code><STATUS></code> element underneath each <code><GUIDE></code> which you can easily map to either 0 or 1. To process the <abbr title="eXtensible Markup Language">XML</abbr>, I performed a simple pair of search-and-replace operations in Sublime Text:</p>
<ul>
<li>Search for "<code><STATUS>Published</STATUS></code>" and replace with "<code><PUBLISH>1</PUBLISH></code>"</li>
<li>Search for "<code><STATUS>.*</STATUS></code>" and replace with "<code><PUBLISH>0</PUBLISH></code>"</li>
</ul>
<p>That second search and replace uses a teeny bit of <abbr title="REGular EXpressions">regex</abbr>: the period stands for "any character except a line-break" and the asterisk means "any non-zero number of the preceding character". So I'm searching for any non-empty string of text inside of a <code><STATUS></code> element and turning it into <code><PUBLISH>0</PUBLISH></code>, which works because all of my published guides no longer have a <code><STATUS></code> element after the first search-and-replace.</p>
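<p>If you'd rather script the edit than run it in a text editor, the same pair of search-and-replaces is only a few lines of Python. This is just a sketch: the <code>sample</code> string is a made-up miniature of the export, which in reality contains one <code><STATUS></code> element per guide among many other elements.</p>

```python
import re

# A made-up miniature of the LibGuides export, one guide per line
sample = """<GUIDE><STATUS>Published</STATUS></GUIDE>
<GUIDE><STATUS>Private</STATUS></GUIDE>"""

# Pass 1: published guides become <PUBLISH>1</PUBLISH>
result = sample.replace("<STATUS>Published</STATUS>", "<PUBLISH>1</PUBLISH>")
# Pass 2: any remaining <STATUS> element (unpublished, private, etc.)
# becomes <PUBLISH>0</PUBLISH>; "." doesn't cross line breaks, so with one
# guide per line the greedy .* stays inside a single element
result = re.sub(r"<STATUS>.*</STATUS>", "<PUBLISH>0</PUBLISH>", result)
print(result)
```

<p>Scripting it pays off once you start re-importing regularly, since every fresh export needs the same pre-processing.</p>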
<h4 id="configure-the-feeds-importer">Configure the Feeds Importer</h4>
<p>Back inside Drupal, we need to create a new content type and set up the Feeds module to receive our <abbr title="eXtensible Markup Language">XML</abbr> file.</p>
<ul>
<li>Under the "Structure" menu of the admin toolbar, select <strong>Content Types</strong></li>
<li><strong>Add content type</strong> and then give it a name and description, e.g. "Imported LibGuides"</li>
<li>Add fields to your new content type, which at the very least should contain two new fields: an "ugly <abbr title="Uniform Resource Locator">URL</abbr>" field for LibGuides that don't have a friendly <abbr title="Uniform Resource Locator">URL</abbr>, and a "friendly <abbr title="Uniform Resource Locator">URL</abbr>" field. You can make these Text field types with the standard settings.</li>
<li>Under the "Structure" menu of the admin toolbar, select <strong>Feeds importers</strong> (or visit {{drupal root}}/admin/structure/feeds)</li>
<li><strong>Add importer</strong> and then give it a name and description, e.g. "LibGuides Importer"</li>
</ul>
<p>There are a lot of settings here, which can seem intimidating, but that's actually great: the Feeds module gives you control over how data is imported into Drupal, and everything is straightforward if you take the time to read through it. I'll walk through my basic settings, but just know that you could do whatever seems reasonable here and be OK; the only piece of this post you might need to reference is the XPath queries later on.</p>
<ul>
<li>Basic Settings
<ul>
<li>Attach to content type: select your LibGuides content type here</li>
<li>Periodic import: off, periodic import is only for grabbing nodes from web feeds, e.g. RSS</li>
<li>Import on submission: check</li>
</ul></li>
<li>Fetcher: File upload
<ul>
<li>Allowed file extensions: you can leave as is, but I put <abbr title="eXtensible Markup Language">XML</abbr> since I'll only be uploading <abbr title="eXtensible Markup Language">XML</abbr> files</li>
<li>Upload directory: leave as is</li>
</ul></li>
<li>Parser: XPath <abbr title="eXtensible Markup Language">XML</abbr> parser (this option only appears if you installed Feeds XPath Parser)
<ul>
<li>Settings: see the section below on the XPath queries, but trust me this won't be that painful</li>
</ul></li>
<li>Processor: Node processor
<ul>
<li>Bundle: select your LibGuides content type again</li>
<li>Update existing nodes: this is a bit of a judgment call, but you'll be fine with either "Replace existing nodes" or "Update existing nodes."</li>
<li>Skip hash check: I leave this unchecked but you'd be fine either way</li>
<li>Text format: your call, I leave as "Plain text" which is fine for search results</li>
<li>Author: anonymous, or your user if you want to brag about how many nodes you made</li>
<li>Authorize: probably should leave checked</li>
<li>Expire nodes: Never</li>
<li>Mapping for Node processor: make the Title, Body, Published status, Friendly URL, and Ugly URL fields all map to an "XPath Expression" source. The two URL fields are the ones we created with our Imported LibGuides content type, so if you chose different names for them back then, they will appear differently in the Target drop-down options here.</li>
</ul></li>
</ul>
<p>Whew, we're done! I know that looks like a lot, but Feeds has a pretty nice UI for such a sophisticated and powerful module.</p>
<h4 id="parsing-abbr-titleextensible-markup-languagexmlabbr-with-xpath">Parsing <abbr title="eXtensible Markup Language">XML</abbr> with XPath</h4>
<p>Now for the fun part: we need to map <abbr title="eXtensible Markup Language">XML</abbr> elements in LibGuides to Drupal fields using XPath expressions. We also get to say things like that which only .01% of humans understand.</p>
<p>XPath is a query language for <abbr title="eXtensible Markup Language">XML</abbr>, if you know <abbr title="Structured Query Language">SQL</abbr> or <abbr title="Cascading Style Sheets">CSS</abbr> it's kind of similar. It gives you a way of traversing the structure of an <abbr title="eXtensible Markup Language">XML</abbr> document to retrieve the contents of various elements. The LibGuides <abbr title="eXtensible Markup Language">XML</abbr> is structured in a pretty logical, simplistic manner so writing our queries won't be tough. Back in the Feeds importer settings that we were just editing, select the <strong>Settings</strong> link under the Parser section. This gives us a menu where we can write our XPath queries. Here's the setup that I use with some English translations:</p>
<p>Context: //GUIDE</p>
<p>We want our queries to run in the context of each <code><GUIDE></code> element. We could do without this, but it means we'd be prepending <code>/LIBGUIDES/GUIDES/GUIDE/</code> to each query below, which is silly.</p>
<p>title: NAME</p>
<p>body: DESCRIPTION</p>
<p>Set the name of the LibGuide to the node's title and the body of the node to its description. The description is the brief sentence which shows up underneath the name of a LibGuide.</p>
<p>field_friendly_url: FRIENDLY_URL</p>
<p>field_ugly_url: URL</p>
<p>Each <code><GUIDE></code> element has two <abbr title="Uniform Resource Locator">URL</abbr>s, so we map both of those to the two custom fields we set up on our Imported LibGuides content type. Once again, if you named your fields something different, their machine-readable names (which is what you see in this menu, they're just lowercase with underscores instead of spaces) will be different.</p>
<p>status: PUBLISH</p>
<p>Remember when we edited the LibGuides <abbr title="eXtensible Markup Language">XML</abbr> to set up a <code><PUBLISH></code> element that's either 0 or 1? That's where this mapping comes into play, taking that Boolean value and using it as Drupal's publication status field.</p>
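<p>If you want to sanity-check the queries before configuring the importer, you can run them against a snippet of the export with Python's standard library, whose ElementTree module understands a limited flavor of XPath (note its <code>.//GUIDE</code> in place of the importer's <code>//GUIDE</code> context). The XML below is a hypothetical miniature of the real export, which has many more elements per guide:</p>

```python
import xml.etree.ElementTree as ET

# Hypothetical miniature of the LibGuides export structure
xml = """<LIBGUIDES><GUIDES>
  <GUIDE>
    <NAME>English Composition</NAME>
    <DESCRIPTION>Resources for ENG 101.</DESCRIPTION>
    <URL>http://libguides.example.edu/content.php?pid=1234</URL>
    <FRIENDLY_URL>http://libguides.example.edu/composition</FRIENDLY_URL>
    <PUBLISH>1</PUBLISH>
  </GUIDE>
</GUIDES></LIBGUIDES>"""

root = ET.fromstring(xml)
guides = []
# Context query: every query below runs relative to one <GUIDE> element,
# mirroring the importer's //GUIDE context setting
for guide in root.iterfind(".//GUIDE"):
    guides.append({
        "title": guide.findtext("NAME"),
        "body": guide.findtext("DESCRIPTION"),
        "field_friendly_url": guide.findtext("FRIENDLY_URL"),
        "field_ugly_url": guide.findtext("URL"),
        "status": guide.findtext("PUBLISH"),
    })
print(guides[0]["title"])
```

<p>If a query comes back empty here, it will come back empty in the importer too, so this is a quick way to catch a misspelled element name before creating a pile of broken nodes.</p>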
<p>You can leave all the "Select the queries you would like to return raw <abbr title="eXtensible Markup Language">XML</abbr> or <abbr title="HyperText Markup Language">HTML</abbr>" options unchecked. Note that this could provide some interesting options if you were doing more sophisticated things with LibGuides, since the <abbr title="eXtensible Markup Language">XML</abbr> export contains all the raw <abbr title="HyperText Markup Language">HTML</abbr> of the various boxes in each guide. Debug Options can also be left unchecked, although if you're testing this process I recommend checking them off. The debug options show you what Drupal found with each XPath query, which can help you configure the importer properly.</p>
<p>I leave "Allow source configuration override" unchecked as well. Since we just set up our XPath queries the way we wanted, there's no need to override them later. However, you could do something interesting where you set up a generic LibGuides importer in these settings, then have multiple different ways of mapping the <abbr title="eXtensible Markup Language">XML</abbr> into nodes.</p>
<h4 id="redirecting-imported-nodes-to-libguides">Redirecting Imported Nodes to LibGuides</h4>
<p>Before we actually import our LibGuides, we want to make sure they're handled appropriately. That is, we don't want people clicking on their search results simply to see some lame text and <abbr title="Uniform Resource Locator">URL</abbr>s on the screen, we want them to be redirected straight to the LibGuide.</p>
<p>There are probably other ways to do this, for instance the <a href="https://drupal.org/project/field_redirection">Field Redirection</a> module, but I use node templates, which are PHP templates that apply only to specific node types. Under the Templates folder of your theme (likely somewhere in sites/all/themes), create a file named "node--imported-libguides.tpl.php", where "imported-libguides" is your LibGuides content type's machine name with hyphens in place of underscores. Inside that template, paste the following PHP:</p>
<pre><code><?php
// redirect user to LibGuide rather than node if user is not signed in
// uid 0 means anonymous user
if ( $user->uid == 0 ) {
  // prefer friendly URL if available
  if ( $node->field_friendly_url ) {
    drupal_goto( $node->field_friendly_url[ 'und' ][ 0 ][ 'value' ] );
  } else if ( $node->field_ugly_url ) {
    // ugly_url should always exist but just in case, use a conditional
    drupal_goto( $node->field_ugly_url[ 'und' ][ 0 ][ 'value' ] );
  }
} else {
  print render($content);
}
?>
</code></pre>
<p>I've written comments in the code, but essentially here's the path this code steps through:</p>
<ul>
<li>Is the user anonymous? If yes, redirect them. If not, we assume the user is some kind of editor, so we print out the lame text fields. This makes it easier for librarians to edit nodes after they've been imported, but assumes that your users don't have Drupal accounts. If they do, you'll need to consider the first <code>if</code> condition thoroughly to make sure only the right types of users are seeing the plain text.</li>
<li>Does the node have a friendly URL? If so, redirect anonymous users to it.</li>
<li>If not, the node must have an ugly URL, redirect anonymous users to that.</li>
</ul>
<p>I noted it above, but because it's so important: <em>if you allow users to create Drupal accounts, this template won't work well</em>. It won't expose confidential data or anything, but it's definitely meant for Drupal sites where all non-editor traffic is anonymous.</p>
<p>Your theme may also have a particular way of printing out nodes that you want to stick to; in that case, you'd be better off copying node.tpl.php or another node type template rather than using my code verbatim. You could put the logic piece of this code at the top of your node template, dropping the <code>else</code> clause at the end. That would work fine as long as it's named appropriately, e.g. "node--imported-libguides.tpl.php".</p>
<h4 id="were-almost-there">We're Almost There</h4>
<p>Now that our template is set and our importer configured, we need to create an importer node, give it a file, and let it run wild. Go to {{drupal root}}/import to see a list of available importers, including the default ones that come with the Feeds module and your LibGuides Importer. Select LibGuides Importer and you're greeted with the usual node editing form, except this time there's a place to upload a file towards the top. Use that to browse to the processed LibGuides <abbr title="eXtensible Markup Language">XML</abbr>, then upload it. You can leave the body and other fields blank.</p>
<p>Once you've created this node, it will have an <strong>Import</strong> tab with an identically named button. Simply click that and your nodes should be created in Drupal, with whatever debug messages you chose in the importer displaying as well.</p>
<p>Totally screwed up the XPath queries, causing a bunch of broken and useless nodes to be imported? No worries, the importer node that you just created has a <strong>Delete items</strong> tab which can delete any of the nodes which it imported. This makes trying out a Feeds importer rather risk free; just keep trying until you get it right.</p>
<h4 id="final-steps">Final Steps</h4>
<p>Drupal's internal search index will still need to index the new nodes before they show up in its results. You can run cron a few times depending on how many nodes you just added and they should show up. Try a search for the title of a LibGuide that wouldn't return any of your other pages, and make sure clicking on a LibGuide result from an anonymous session causes you to be redirected to the guide.</p>
<p>As LibGuides are added and removed, you'll have to sync them to their Drupal nodes again. However, after the first time through, the process only takes a few minutes: grab a new XML export, upload it, and click the import button.</p>
<h3>Foreign For-In, or Python as a First Language (2013-07-01)</h3>
...being a brief recap of my experience at the Python Preconference at <abbr title="American Library Association">ALA</abbr> Annual. In general, the session was a smashing success and I was elated to see a diverse group of people picking up Python so quickly. Without going into details, which I think other attendees or organizers will cover elsewhere, here's one struggle and one pleasant surprise from the preconference.<br />
<br />
<h4 id="explain-a-for-in-loop">
Explain a For-In Loop</h4>
Describing how a for-in loop works was difficult and I repeatedly ran into attendees who just couldn't quite <a href="https://en.wikipedia.org/wiki/Grok">grok</a> it. A Python for-in loop looks like:<br />
<br />
<pre><code>for word in wordlist:
    print word
</code></pre>
<br />
That would loop through the wordlist data structure, which we'll say is a list (similar to an array in other languages), printing each term to the screen. Simple, right? But it's actually pretty weird, because in the above example <em>what exactly is word</em>? It's a local variable that gets a new value each time through the loop. If for-in loops for lists didn't exist in Python, you might implement them like so:<br />
<br />
<pre><code>i = 0
while i &lt; len( wordlist ):
    # being super explicit here
    word = wordlist[ i ]
    print word
    i = i + 1
</code></pre>
<br />
<code>len( wordlist )</code> here returns the length of the wordlist list, for non-Python people. Otherwise, I assume the syntax is straightforward for anyone who knows a little code. The biggest disadvantage to this implementation is that you end up with two variables in the scope—<code>i</code> and <code>word</code>—neither of which is useful after the loop has run.<br />
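Incidentally, what Python really does under the hood is closer to the iterator protocol (the built-in <code>iter()</code> and <code>next()</code> functions) than to index arithmetic. A rough sketch, in Python 3 syntax and with an invented example list:

```python
# A rough sketch of what `for word in wordlist:` does internally:
# Python asks the list for an iterator, then pulls items from it
# until the iterator runs dry.
wordlist = ['spam', 'eggs', 'ham']

iterator = iter(wordlist)      # ask the list for an iterator
while True:
    try:
        word = next(iterator)  # fetch the next item
    except StopIteration:      # raised when the items run out
        break
    print(word)
```

This is only a conceptual model, but it explains why for-in needs no index variable at all.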
<br />
I'm not sure my explicit for-in loop is more clear to a new programmer, but it's my conceptual model. Students struggled with understanding the for variable's name; where does <code>word</code> come from? In <a href="http://librarycodeyearig.github.io/python-preconference/lecture.html">the lecture</a>, Becky Yoose used this example:<br />
<br />
<pre><code>for fruit in pies:
    print fruit
</code></pre>
<br />
The reaction from attendees seemed to be "since pies is a list of different fruits, the variable name has to be 'fruit' here." As if Python were somehow doing natural language processing to figure out a good descriptive term for an individual item in a thematic list. It's a weird thing to grasp conceptually, perhaps the crux being <em>you're getting a variable without any assignment statement</em>. That's a nice convenience for programmers coming from other languages but it obscures what's going on for learners.<br />
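One way to dispel that misconception in class is to rename the loop variable to something deliberately meaningless and show that nothing changes. A sketch in Python 3 syntax, with an invented <code>pies</code> list:

```python
# The name between `for` and `in` is arbitrary: Python simply binds
# each item of the list to that name on each pass through the loop.
# There's no natural-language smarts matching 'fruit' to fruits.
pies = ['apple', 'cherry', 'rhubarb']

for fruit in pies:
    print(fruit)

# The exact same loop with a meaningless variable name:
for x in pies:
    print(x)
```

Both loops print the same three items; only the programmer cares that one name is descriptive.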
<br />
<h4 id="nested-loops">
Nested Loops & First Languages</h4>
On the other hand, I found that a lot of our exercises and final projects involved nested loops, sometimes three to four layers deep. Everyone seemed to absorb this without conceptual difficulty. Maybe it's my own experience speaking here, but I get more and more anxious the deeper my indents go. A lot of this anxiety is based in JavaScript, where blocks wrapped in curly braces tend to take up more space and are harder to parse than in whitespace-happy Python. The ugliest code in the world is an <a href="http://benalman.com/news/2010/11/immediately-invoked-function-expression/">immediately-invoked function expression</a> which ends in a bunch of closed code blocks:<br />
<br />
<pre><code>            }
        }
    }
}( 'this happens way too often in JavaScript' ) );
</code></pre>
<br />
Python's conveniences, like <code>range()</code> and how the for-in loop works seamlessly across different data types (lists, dictionaries, even strings. Strings, people!), are a serious boon to beginners. I still think JavaScript makes a great first language for a few reasons: 1) everyone already has it installed via their web browser, so there's zero setup barrier, 2) the web is where data and applications live these days and JavaScript is the language of the web, and 3) a trivial amount of jQuery can make cool things happen. Other languages require more investment before the cool things go down.<br />
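That seamlessness is easy to demonstrate: the same for-in syntax walks a string character by character, a dictionary key by key, and a <code>range()</code> number by number. A quick illustration in Python 3 syntax (the sample values are invented):

```python
# One for-in syntax, several data types.
letters = []
for ch in 'abc':               # strings yield one character at a time
    letters.append(ch)

keys = []
for k in {'a': 1, 'b': 2}:     # dictionaries yield their keys
    keys.append(k)

squares = []
for n in range(4):             # range(4) yields 0, 1, 2, 3
    squares.append(n * n)

print(letters, keys, squares)
```

Beginners get one looping idiom that covers nearly every container they'll meet.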
<br />
But the setup process wasn't an issue for the preconference. We held a help session the night before and only two people came; one of them already had Python installed and on the Windows path and just needed confirmation that they'd done it right. A number of factors contributed to the ease of setup: many attendees had Macs, which typically come with a 2.6.x or 2.7.x version of Python; the Boston Python Workshop docs are great and cross-platform; and a fair portion of attendees were advanced computer users. So with an easy setup, Python (or Ruby) is a sensible choice for a first language.
<h3>Describe Your Ideal Work Environment (2013-06-18)</h3>
I served on two search committees recently and <a href="http://patametadata.blogspot.com/2013/04/faculty-technology.html">blogged about</a> <a href="http://patametadata.blogspot.com/2013/04/what-i-learned-from-my-first-search.html">the experience</a>. I was struck by how tough it was to frame good interview questions. A lot of the questions we asked ended up being duds, not receiving a single response which illuminated anything about our candidates. Yet once you've asked a question, you're rather obligated to ask it of each person, for fairness' sake.<br />
<br />
On the other hand, I also recently interviewed for a position and I was asked an excellent question: "Describe your ideal work environment." Why is this so great? I think it helps both parties, the search committee and the interviewee. The interviewee's answers must, almost of necessity, be revealing. So much so that the committee might rule a candidate out based upon this question alone, which really aids the interviewee: if your own ideals are deeply at odds with an institution's, it's better to be ruled out ahead of time than to find that out a few months after you've started.<br />
<br />
But what I really want to talk about is how I answered this question. Maybe it wasn't what the committee wanted to hear—I didn't get the position—but it felt good to articulate.<br />
<br />
<h4>
Control Over My Work Environment</h4>
<div>
<br />
Specifically, my computer. I want to run the operating system and software of my choice. Unfortunately, this is all-too-rare at most libraries and educational institutions.</div>
<blockquote class="twitter-tweet">
<a href="https://twitter.com/tararobertson">@tararobertson</a> it continues to shock the hell out of me how having admin privs to one's own work machine is somehow unusual or a bonus.<br />
— John Fink (@adr) <a href="https://twitter.com/adr/statuses/224910852455804928">July 16, 2012</a></blockquote>
<script async="" charset="utf-8" src="//platform.twitter.com/widgets.js"></script>
To be fair, I understood that there was no way I'd receive admin privileges at this position. But it's definitely a preference of mine. It's positively unproductive to limit the software available to information professionals. I do lots of development work; I have probably installed forty-plus packages on my Windows (not my first choice) machine at work. It's a waste of IT Support's time to come to my office to type in a password once a week; it's a waste of my time putting off a task because I can't install a requisite tool. I'm incredibly appreciative that <abbr title="My Place Of Work">MPOW</abbr> allows me admin privileges.<br />
<br />
Every institution should have a simple "admin quiz" one can take to receive appropriate privileges. I understand why we deny everyone by default; running an institution's computers is hard work and ensuring consistent security and software settings is a great aid. But those of us who are capable of administering our own computers, who know to run antivirus software (or just not run Windows...sorry, I'm belaboring the point) and avoid sketchy links in emails, should be given that prerogative.<br />
<br />
While I've rambled quite a bit about computers, I also like to control my office environment. Now that I have my office set up the way I like, I'm rather attached to it. I like to have a standing desk, some room for pictures on the wall, some open space. I can do without, but <a href="http://www.bartleby.com/129/">I'd prefer not to</a>.<br />
<br />
<h4>
Data-Driven Decision-making</h4>
<div>
<br />
I like to make decisions based upon data rather than my own feelings or opinions. That data doesn't have to be quantitative; I have a great appreciation for user experience research and I wish I had more time to devote to it. There's no substitute for seeing actual users perform actual tasks, whether it be searching for a peer-reviewed article or trying to find the print card vending machine.</div>
<div>
<br /></div>
<div>
This isn't a personal preference either, it's an institutional one. I love seeing data brought up in meetings, at presentations, in board meetings. It says something about an institution and its commitment to objectivity and success. Again, it's not all that common and that's understandable; collecting and analyzing data is difficult, time-consuming work. But recognizing the importance of those activities isn't.</div>
<div>
<br /></div>
<h4>
Failure is Natural</h4>
<div>
<br />
As <a href="http://patametadata.blogspot.com/2013/03/libraries-art-math-value-of-failure.html">I've covered before</a>, I have a great appreciation for failure. We cannot be successful in all our ventures and we often learn as much from the crash-and-burn projects as the epic wins. An institution that acknowledges that failure is a natural part of its own evolution is one I want to work for. I want to see presentations that not only say "gee, we really screwed up here" but also "and here's how we'll avoid the same mistakes next time." There's nothing more frustrating than seeing people cover up obvious mistakes because you just <i>know </i>that they'll be repeated in the future.</div>
<div>
<br /></div>
<h4>
That's My List</h4>
<div>
<br /></div>
<div>
or at least part of it, the main items certainly. What's yours? Is there anything in particular that libraries do well or struggle with?</div>
<div>
<br /></div>
<div>
Again, I think this question is more of a healthy exercise in articulating your own priorities than a wish list. I fully expect that I'll never work for an institution that passes with flying colors in all three of these categories, but that doesn't mean that I shouldn't recognize my own predilections.</div>
<h3>Teach While You're Learning Yourself (2013-06-05)</h3>
There's a (pretty reasonable) theory that the best way to learn is from the experts. They know what they're talking about, right? It makes sense, and those who have studied and worked in an area for years have valuable insights to share. They know the pitfalls, the broken assumptions, the brilliant hypotheses, and they can communicate them.<br />
<br />
But the experts have their disadvantages. The fundamentals are so ingrained in them, so second nature, that they speak a different language. A technical term on their lips has an intricate history, labyrinths of connotations. The neophyte, on the other hand, has but glimpsed the adumbrations. They've learned a term only to find out their understanding was slightly askew. Their confusion is laden with value, with the very undulation of learning. It should be harnessed while it's fresh.<br /><br />
<h4>
Enough Abstraction Already</h4>
<div>
I'm engaged in a community of librarians who are steadily leveling up their technical skills. A lot of this happens in the Library Codeyear Interest Group (come to <a href="http://ala13.ala.org/node/10690">the Python Preconference</a> at ALA!), but also on the <a href="http://acrl.ala.org/techconnect/">ACRL Tech Connect</a> blog where our posts are less prophets handing down commandments than regular ol' librarians sharing their inchoate knowledge.<br />
<br />
A specific example is the Codeyear IG's <a href="https://github.com/LibraryCodeYearIG/Codeyear-IG-Github-Project">GitHub Project</a>, which I started (though feedback from participants and <a href="http://andromedayelton.com/">Andromeda Yelton</a> has been invaluable). I started the project <i>despite being mediocre at Git and GitHub</i>. I am not a software developer. Sure, I have a deceptive number of projects on my GitHub account, but I'm thoroughly amateur and still make embarrassing mistakes.<a href="#note1" id="fn1">[1]</a> But that hasn't hindered the project's efficacy: we've had ten people complete the Getting Started tutorial and many more read the Tech Connect blog posts on it. If nothing else, it's upping the community's exposure to and understanding of awesome tools like GitHub.<br />
<br />
Part of the success of the GitHub Project, I hope, is my ability to write for beginners. Having just started using version control myself, I'm hesitant to employ Git terminology which is familiar to people coming from other VC systems but not to people new to the whole class of software. For instance, rather than write something like <q><code>git commit</code> does just what it says: <a href="https://www.kernel.org/pub/software/scm/git/docs/git-commit.html">it stores the current contents of the index in a new commit along with a log message from the user describing the changes</a></q> it's obvious that <q><a href="https://github.com/LibraryCodeYearIG/Codeyear-IG-Github-Project/blob/master/Getting%20Started/readme.mdown#fourth-step---add-your-name-to-the-list-of-people">the <code>commit</code> command finalizes our changes and adds them to the project's history</a></q> is a better explanation. But even with my valuable inexperience, I still assume familiarities that don't necessarily exist. An early participant noted that the keyboard shortcut to exit the <code>git log</code> command was never mentioned (it's the letter <kbd>q</kbd>, by the way). <em>This is precisely the sort of key detail that is lost on experienced users.</em> I'm no command line expert, but I know that <kbd>q</kbd> exits the <code><a href="https://en.wikipedia.org/wiki/Less_(Unix)">less</a></code> pager. It was a real hangup for me when I was learning, but now that I press it several times a day, it's cognitively absorbed. I forgot that it was something I had to <em>learn</em>, once upon a time.<br />
<br />
<h4>
Old News</h4>
</div>
<div>
Pedagogy has known for a while that experts do not necessarily make great teachers. We've all heard of the move away from the "sage on stage" to the "guide on the side," which is related to the critique of top-down knowledge transmission. Other current movements, like "flipping the classroom," where lectures occur outside of class while time in the classroom is used for group projects, also come to mind. But we often fail to carry these lessons over to professional development; when you schedule conference sessions, do you look for Delphic panels like Top Tech Trends or amateur confessionals like <a href="https://groups.drupal.org/node/133534">Drupal Fail</a>? More importantly, do you stop yourself from writing to a listserv, tweeting, blogging, or proposing conference sessions because you feel too inexperienced, <a href="https://en.wikipedia.org/wiki/Impostor_syndrome">too fraudulent</a>?</div>
<div>
<br /></div>
<div>
A lot of librarianship is learning, whether it's how to teach information literacy or how to code, and we benefit as a community when everyone shares their own lessons. Go forth and edify, ye novices.</div>
<div>
<br /></div>
<h4>
Footnotes</h4>
<div id="note1">
1.<a href="#fn1">^</a> The history of <a href="https://github.com/phette23/dotfiles/">my fork of a dotfiles repo</a> has damning evidence. There's weird-looking stuff if you run <code>git log --pretty=oneline -n 50 --graph</code> for what should be a fairly straightforward project.</div>
<h3>Blacklisting Wikipedia & Information Literacy (2013-05-21)</h3>
<p>I taught an interdisciplinary course this past semester, "The Nature of Knowledge." My co-instructor and I focused specifically on what happens to knowledge in a networked, digital environment. The course was revelatory for me, both because it was the first course I've taught as lead instructor and because of how students reacted to our content. The course is going to inspire a slew of blog posts, but I want to start with a plea to postsecondary educators:</p>
<p><strong>Your attitude towards Wikipedia is destroying students' critical thinking</strong>.</p>
<p>I say this because virtually every student in the class had heard that Wikipedia is inappropriate for academic use. And it is; <a href="https://en.wikipedia.org/wiki/Wikipedia:Academic_use">it says so itself</a>. The problem is <em>they have no idea why</em>. The most common reason proffered was "because my professors said so," the very antithesis of critical thinking.</p>
<h4 id="information-literacy-lists">Information Literacy & Lists</h4>
<p>The third bullet point in <abbr title="Association of College & Research Libraries">ACRL</abbr>'s <a href="http://www.ala.org/acrl/standards/informationliteracycompetency">information literacy competency standards</a> is "evaluate information and its sources critically." This is where assignments that blacklist or whitelist certain sources fail. Rather than equip students to analyze sources, valid sources are pre-selected and often according to arbitrary criteria. For instance:</p>
<p><strong><q>No Internet sources</q></strong> is a common theme. Even with an "except the library databases" caveat, this is at best confusing and at worst counterproductive. What about Google Scholar, <a href="https://oaister.worldcat.org/"><abbr title="Open Archives Initiative">OAI</abbr>ster</a>, <a href="http://www.scirus.com/">Scirus</a>, and all the other open access aggregators? The web is the primary delivery mechanism for scholarly knowledge. One cannot simply write it off. This is especially harmful because it trains students to ignore so many wonderful sources out there. What will they do when they don't have access to research databases? We're teaching them that the open web is useless for research when it's not.</p>
<p>Then there are the <strong>blacklists which specify Wikipedia</strong>. The issue here is the discrimination: why is Yahoo! Answers not listed? About.com? Ask.com? Conservapedia? The list goes on. It's a fruitless endeavor to delimit the poor sources from the good ones. And Wikipedia is likely singled out not because it's particularly bad but because it's so common.</p>
<p>Finally, there's the inverse approach of <strong>assignments which require peer-reviewed articles</strong>. The issue is that, at least for the first few years of an undergraduate degree, peer-reviewed sources are too arcane for our students. This is less an indictment of students' reading than of academic writing, which eschews accessibility. I've heard grumbles around the librarian blogosphere about peer-reviewed article requirements (Meredith Farkas' <a href="http://meredith.wolfwater.com/wordpress/2011/10/27/i-need-three-peer-reviewed-articles-or-the-freshman-research-paper/">screed against freshman research papers</a> is a must-read) and plenty of people are critical of them. They're still all-too-common in assignments.</p>
<p>The underlying concern throughout all of these approaches is that they rarely explain <em>why</em>. Why is the web so awful, especially since many scholarly sources now appear there? Why is Wikipedia specifically worse than other sites that allow anyone to publish? What the heck is peer review and why do we care about it? I touch on all these when I teach information literacy, but half of the time I'm combating the assignment. The <abbr title="Association of College & Research Libraries">ACRL</abbr> standard is <q>evaluate information and its sources critically</q>, not <q>uncritically accept whatever unjustified stance is taken by the assignment.</q> These assignments cultivate <q>intellectual laziness</q>, to quote my co-instructor, not the skills to critically evaluate any source, regardless of where one happened to find it.</p>
<h4 id="whats-really-wrong-with-wikipedia">What's really wrong with Wikipedia?</h4>
<p>In my classes, I'll often do comparative searches across Google and a library database, then ask students to evaluate a chosen result using a metric like the <a href="http://libguides.chesapeake.edu/annotated-bib/craap">CRAAP</a> test. I distinctly recall a class when I brought up a Wikipedia article and asked which elements of the CRAAP test it failed. <q>All of them,</q> a student ventured. Nothing could be further from the truth.</p>
<p>Currency? It varies, but most Wikipedia entries are updated frequently. In fact, this is one area where Wikipedia has a structural advantage over other modes of publication: because there's such a wonderfully low barrier to participation, new information can be added as soon as it's published elsewhere. Compare this to traditional tertiary sources (especially print ones), where editorial and publishing processes delay information becoming available to end users. On the other hand, compare Wikipedia to other websites; every article has, down to the very minute, its last-updated date visible on the "<a href="https://en.wikipedia.org/wiki/Help:Page_history">View history</a>" tab. Anyone who has helped a student cite a website knows that determining the publication date is usually an exercise in futility.</p>
<p>Relevance? Wikipedia's enormous breadth virtually ensures it has something relevant to say on any topic.</p>
<p>Authority? <em>This is Wikipedia's only problem in terms of the CRAAP test</em>. We usually don't know who has contributed to any given article, it could be a credentialed academic or anyone else. Wikipedia itself dismisses authority, <a href="https://en.wikipedia.org/wiki/Wikipedia:About">stating</a> <q>What is contributed is more important than the expertise or qualifications of the contributor</q>, a provocative stance which I don't have space to explore here.</p>
<p>Accuracy? Wikipedia articles can have hundreds of references. The encyclopedia's insistence on <a href="https://en.wikipedia.org/wiki/Wikipedia:Verifiability">verifiability</a> and <a href="https://en.wikipedia.org/wiki/Wikipedia:Citations">citations</a> are cardinal strengths. In fact, the way I recommend most students use Wikipedia articles is to learn important terminology from them and mine their references. I wrote a paper in <em>graduate school</em> on net neutrality; Wikipedia was my first stop and it outlined not only the major issues but also linked directly to pertinent policies and secondary sources. Thanks largely to that excellent start, my paper earned an A.</p>
<p>Purpose? Wikipedia is admirably forthright about what it is and <a href="https://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is_not">is not</a>. Its goals are noble (as evidenced by its enlightened <a href="https://en.wikipedia.org/wiki/Wikipedia:Five_pillars">five pillars</a>), especially relative to for-profit alternatives like About.com which display ads and lack external references.</p>
<p>Yet no one had walked my students through this kind of analysis. No one had shown them the "View history" tab of an article, or its references section, or any of Wikipedia's fundamental policies. <em>Why</em> Wikipedia is a non-academic source was always left as an exercise for the reader.</p>
<h4 id="anyone-can-edit">"Anyone Can Edit"</h4>
<p>It's worth investigating the "anyone can edit" argument further, because it appears to be the main objection to Wikipedia.</p>
<p>First of all, it is not strictly true that anyone can edit any article at any time. Certain articles are <a href="https://en.wikipedia.org/wiki/Wikipedia:Protected_page">protected</a> and can only be edited by a subset of editors, such as administrators or confirmed accounts. These articles tend to be common targets for vandalism. They form a small minority of articles.</p>
<p>Secondly, as my students found out, wiki markup is nontrivial. It takes some familiarity before an editor can do anything other than add unformatted text. This was a large obstacle for most of my students; despite an exercise introducing them to HTML using Codecademy early on, many struggled to understand more complex markup structures such as references and links. It seems unlikely that someone would invest a great deal of time learning wiki markup only to write nonsense into articles. Most would take the time to learn editorial guidelines as well as markup, which we did in our class by reading guidelines and Joseph Reagle's <cite><a href="http://reagle.org/joseph/2010/gfc/">Good Faith Collaboration</a></cite>.</p>
<p>Thirdly, the "anyone can edit" objection often refers to vandalism more so than biased or inaccurate writing. The problem with this argument is...how often do you see actual vandalism on Wikipedia? Even the hypocrites who ban Wikipedia have likely read dozens if not hundreds of articles. I've probably read thousands myself but I've only ever seen vandalism once, which is largely <em>due to the fact that anyone can edit</em>. Anyone who spots vandalism can easily remove it and Wikipedia also employs bots to detect and delete vandalism. I made <a href="http://youtu.be/LZg1DVE_tYg">a brief video</a> that covers the points in this paragraph, showing <a href="https://en.wikipedia.org/w/index.php?title=Internet_privacy&diff=535879631&oldid=535879612">an example of vandalism</a> that was reverted <em>within a minute</em>.</p>
<p>Finally, and most importantly, "anyone can edit" does not equate to "anyone writes anything they want." Wikipedia has standards which are enforced by an editorial community. It is not an open forum for any kind of discourse, it's an open encyclopedia written from a neutral point-of-view. Yes, there are articles which are inaccurate, biased, or incomplete. But they're not the product of a million monkeys hammering away on laptops, they're deliberate steps towards a better and more encyclopedic article.</p>
<h4 id="whats-really-great-about-wikipedia">What's really great about Wikipedia?</h4>
<p>Wikipedia has a few advantages over traditional research sources, such as the widely distributed editorship, the speed with which articles can be updated, its strong community norms, and the bots which automate low-level tasks like reverting vandalism. But there has always been one thing that stands out about Wikipedia to me: <em>it is the only source which warns you of its own inadequacies</em>. From inline <q>citation needed</q> and <q><a href="https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Words_to_watch#Unsupported_attributions">weasel words</a></q> warnings, to colored boxes up top (<q>unencyclopedic</q>, <q>doesn't represent a worldwide view</q>, <q>personal reflection or opinion</q>, <q>uses out-of-date sources</q>...the sheer variety of these indictments speaks to just how high the encylopedia's standards are, and how often they're not met); Wikipedia wants you to know it's imperfect. <q>Users should be aware that not all articles are of encyclopedic quality from the start: they may contain false or debatable information.</q></p>
<p><strong>No one else does this</strong>. Not About.com, not Britannica, not <a href="http://www.thecrimson.com/article/2013/4/24/rogoff-error-defense/">brilliant economists</a> who make errors in their Excel spreadsheets. A source detailing its own issues is virtually unheard of and can only come about in a community like Wikipedia, where numerous editors representing diverse viewpoints constantly enforce a set of stringent standards.</p>
<p>To bring this back to assignment structure, it provides instructors with an easy criterion, too. If you must blacklist Wikipedia articles, how about starting with the ones that have issues identified by alert boxes? While this doesn't challenge students to analyze sources by themselves, it at least tells them <em>why</em> a particular article is unusable.</p>
<h4 id="scaffolding">Scaffolding</h4>
<p>I'll readily admit; I oversimplify concepts in instruction sessions all the time. It's productive to create a foundation of a few artificial givens upon which students can build. Then, in a later course, those assumptions can be examined and problematized. So, to some extent, black- or whitelists of sources are useful, they wean students off of poor sources until students can analyze them on their own. Scaffolding is tricky and I certainly haven't mastered it yet.</p>
<p>However, college is the time when we should be examining students' perceptions surrounding Wikipedia. The Wikipedia ban is a high school scaffold; it needs to be torn down in the first two years of college. Students can benefit from using Wikipedia articles appropriately, from understanding tertiary sources, from thinking critically about the sorts of issues that pop up in alert boxes at the top of questionable articles. If nothing else, the heresy of crowdsourcing—that a mass of amateurs can produce information as good as or even better than a handful of experts—must be taught. It's too important to today's information economies to be overlooked.</p>
<h3>Faculty & Technology (2013-04-18)</h3>
<p>This is a continuation of my <a href="http://patametadata.blogspot.com/2013/04/what-i-learned-from-my-first-search.html">First Search Committee</a> post, largely inspired by respondents' answers to questions about technology. I broke it into a second post for several reasons: there were no good responses to technology questions, the first post was already pretty long, & I have a specific interest in educational technology. It's not merely that I'm a library technologist; it's also that I serve on a distance learning committee that exposes me to a lot of major issues with the way we deliver education online. When we framed what we wanted in a candidate, experience teaching online was one of our primary attributes. Despite receiving numerous well-qualified applicants, this was the one area where we couldn't match our desired qualifications.</p>
<h4 id="bad-interview-responses">Bad Interview Responses</h4>
<p><q>Students love watching videos! I use lots of YouTubes.</q></p>
<p>I asked a question about online tech usage during the phone interviews and <em>no one</em> gave a convincing answer. The worst were neo-Luddites and the best contented themselves to list a series of proper nouns as if that demonstrates technical competence: BlackBoard, WebCT, YouTube, PowerPoint. PowerPoint is never a great answer to <em>any</em> question, but it's a particularly bad answer to a question about <em>online technology</em>. What I really wanted was an honest, critical opinion of tech. Tell me <em>how</em> you use it and <em>why</em>, not <em>what</em> you use. Your specific tools are time-sensitive and prone to making you look foolish if you name something antiquated. If you have <q>proficient in WordPerfect</q> on your CV, now would be the time to remove it.
Honestly, my solitary question about online technology was easily the most troublesome part of the hiring process. Many instructors are comfortable using technology but few seem thrilled about it or possessed of even a rudimentary understanding.</p>
<p><q>I love technology, but Twitter/texting is ruining my students' writing.</q></p>
<p>Really? That's interesting, do you have any longitudinal data to share? I assume you ran a multi-year study, comparing students who use Twitter to a control group who do not, to come to this conclusion. It's a bit controversial, because <a href="http://www.kdp.org/publications/theeducationalforum/pdf/TEF764_Greenhow_Gleason%20(2).pdf">virtually</a> <a href="http://www.ncbi.nlm.nih.gov/pubmed/19972666">every</a> <a href="http://www.cblt.soton.ac.uk/multimedia/PDFsMM09/m-Learning%20An%20experiment%20using%20SMS%20to%20support%20learning%20new%20English%20language%20words.pdf">piece</a> of <a href="http://jlr.sagepub.com/content/41/1/46.full.pdf">research</a> <a href="http://www.siu-voss.net/Plester__txt_msg_in_school.pdf">on</a> <a href="http://www.siu-voss.net/Voslo__effects_of_texting_on_literacy.pdf">this</a> <a href="http://en.wikipedia.org/wiki/Txtng:_the_Gr8_Db8">subject</a> disagrees with you: the more someone reads and writes, the better they are at reading and writing. Thanks largely to the ubiquity of cell phones, students are reading and writing more today than they ever have in the past. Those students who write poorly today may have been near illiterate without the added practice of texting or tweeting.
Secondly, consider that one of the faculty members you're talking to might be hella into Twitter. I identified myself as the Emerging Technologies Librarian before asking my question; anyone with any familiarity with Twitter probably knows it's popular among those in the tech scene. When you say "Twitter is turning my students into idiots," the snarky response that pops into my head is: I use Twitter, do you think I'm an idiot?</p>
<p><q>Students are good with technology! <em>They</em> show <em>me</em> how to do things!</q></p>
<p>I like the admission that one learns from one's students. I don't mean to indict that. But the blanket generalization that all students are good with technology is not only false, it's damaging. I know these faculty members. They're the ones who ask students to make a chart in Excel without giving any instruction. They ask students to make a video presentation without knowing how to do it themselves. And when the students become frustrated and get stuck, they come to the library, where we patiently try to guess what the faculty member wanted and assist the student in completing the assignment. That's a big part of my job and I'm not complaining about the helping part; I love it. I'm complaining about the poorly written assignment that assumed a skill base that didn't exist.</p>
<h4 id="the-myth-of-the-digital-native">The Myth of the Digital Native</h4>
<p>The assumption that students are skilled and comfortable with technology belies a much more disconcerting issue: <a href="http://www.scribd.com/doc/9775892/Digital-Native">the myth of the digital native</a> is alive and well in academia. The myth, for those who are unfamiliar, is basically <q>the kids these days are so good with computers.</q> It's an assumption that, growing up today, our younger students are so inundated with technology that they somehow magically glean a deeper understanding of it than prior generations. The fact is, many of our younger students know how to log onto Facebook, send a text message, and little else. If you ask them what web browser they use, they will say <q>Google.</q> And they don't mean Chrome, they mean Google. They can't differentiate between the address bar and the search box in Internet Explorer 8. Many have a cursory understanding of the use of some pieces of tech but no conceptual grasp of the larger edifices of the web and computer operating systems.</p>
<p>To be clear: <em>some</em> students obviously do understand tech, the issue is when we assume they all do.</p>
<p>At a community college, the digital native assumption is even more problematic. When you say <q>students are good with technology,</q> meaning that the younger generation is, what I hear is <q>I don't understand that many of my students will be adults, some doubtless older than I am.</q> We have a lot of adults returning to higher ed. For many of them, calculators were the only computing device involved in their prior education. Now we ask them to understand the bloated behemoth that is a Learning Management System, to juggle several different accounts, to manage at least two emails, to complete assignments using specific software packages (e.g. PowerPoint). It's a major struggle for many of them; again, I know because I end up helping them in the library. A faculty member simply assuming technical competence is severely damaging their ability to deliver effective instruction.</p>
<h4 id="where-do-we-go-from-here">Where Do We Go From Here</h4>
<p>I don't have a solution to the problems I've raised. As a job applicant, I would avoid naming specific software, instead describing the broader category to which it belongs (e.g. word processing, presentation). I would also take some time to think critically about how you incorporate technology into instruction. Do you use it to increase collaboration? To make instruction less top-down & more interactive? Or are you simply showing amusing YouTube videos because the students seem to like them? There's a vast gulf between using technology and using it effectively.</p>
<p>Finally, I want to impress one hopeless plea upon the graduate schools of the world: offer—ideally require, but I'll settle for offer—an instructional technology class for all disciplines that covers the basics of technology, its technical underpinnings, how to use it, and finally how it can fit successfully into different pedagogical strategies. As a devotee of two-year, teaching-focused institutions, I already think it's tragic that most faculty members don't receive any teaching training. They become brilliant researchers and writers, but they're mostly left to their own devices when it comes to teaching. Knowledge of educational technology falls by the wayside as a consequence. Its use can be learned on the job but I wish grad schools would do more in this area.</p>Anonymoushttp://www.blogger.com/profile/13737038965630253900noreply@blogger.com0tag:blogger.com,1999:blog-5067904571139905755.post-39561118927919466332013-04-01T06:19:00.000-04:002013-04-01T06:19:25.830-04:00What I Learned from My First Search Committee<p>I was recently on my first search committee for a full-time faculty position at my community college. I was excited to see academic hiring from the other side and, sure enough, I learned much about the process. Below, I express my particular preferences for the enlightenment of the job-searching public. These views do not represent those of my institution or my peers on the search committee (indeed, some would doubtless disagree with certain contentions).</p>
<p>I will write a second post focusing on technology, because I found our applicants' ideas about technology particularly underwhelming.</p>
<h4 id="the-good">The Good</h4>
<p><q>Included is my teaching philosophy.</q><br />
Oh, we didn't ask for a statement of teaching philosophy? I don't care. It was great to read these statements. The mere fact that an applicant sent an unsolicited pedagogical statement meant that they'd thought about teaching as a discipline. It indicated a degree of rigor and dedication.</p>
<p><q>For instance, in my 101 class I have my students do...</q><br />
Yes, a concrete example! It's very easy to say that you're a great teacher, you care about students, you employ multimedia, you know technology, you appeal to variegated learning styles. There, I just did it. Obviously, I'm the best candidate! No, the people who stand out give concrete examples that <em>show</em> me that they know their stuff. They don't say "proficient in Microsoft Office" they say "I hold Skype office hours." They give assignments, lectures, and media right there in the cover or add further attachments.</p>
<p><q>Attached are my student reviews.</q><br />
It surprised me to see student reviews included. It's true that, if you've been teaching a while, it's trivial to pick out the two semesters when you happened to receive ace reviews. But I do like seeing reviews, be they from students or peers. It shows that the faculty member kept the reviews, cares about them, thinks that they speak well of their teaching. That alone is important.</p>
<p><q>Developmental and adult education...</q><br />
These are the people who really get it. While many applications mentioned diversity, surprisingly few actually singled out their experience with developmental education and non-traditional students. When you mention these issues it tells me that not only have you worked at a community college but you paid attention to the institution's foremost issues and programs. How to effectively deliver developmental education is a <em>huge</em> dilemma for us. Even if we're hiring for a position that will never teach developmental classes, the faculty member <em>will</em> teach students either in or recently out of developmental ed.</p>
<p><em>Honesty</em><br />
Many interviewees were clearly honest; they said things that undoubtedly were admissions of weakness, or humanity, basically anything that made them out to be something other than a robot sent from the future to instruct students to death. You're nervous about the interview? That's perfectly normal and we're not hiring you to sit through job interviews, we want to know what kind of teacher you are. You're a woman and a mother, first and foremost, and a faculty member second? Well, that's great, those are logical priorities. The value of honesty isn't in the statements themselves but the trust that it builds with the hiring committee.</p>
<p>I also liked mentions of service learning or flipping the classroom, topics which have come up recently amongst our faculty. It shows that the applicant is aware of some of the same teaching approaches that we utilize.</p>
<h4 id="the-bad">The Bad</h4>
<p><q>I'm available <abbr title="As Soon As Possible">ASAP</abbr>.</q><br />
I've written this in cover letters because it sounds like a plus. In truth, if we're worried about your start date, that worry comes much later. Say you're available to start immediately during your phone interview or especially once you've been called to campus. But anything prior to that is both irrelevant and makes us think that you're desperate, perhaps have been rejected by other places for reasons we haven't discerned yet.</p>
<p><q>My research focus is...</q><br />
Under certain circumstances, & when written in a concise manner, research interests can work well in a cover letter written to a community college. But applicants should understand that I want to know first & foremost what kind of teacher you are. Research is not a part of our mission, period. The easiest way to rule out most candidates was when their cover & <abbr title="Curriculum Vitae">CV</abbr> went to great lengths to demonstrate their research prowess to the detriment of teaching. If you're a respected researcher with plenty of publications & presentations, by all means add that to your <abbr title="Curriculum Vitae">CV</abbr>. But if all you can say about teaching is "yeah I really love it" then your application immediately drops to the bottom of the pile.</p>
<p><q>I taught graduate level quantitative analysis, a seminar on Michel Foucault's notion of transgressive dissimulation in relation to the liminal corporeality of modernity...</q><br />
OK, so you've taught a bunch of things that will never be in our curricula, cool. I guess I can just skip over this section. It becomes even more worrisome if that looks like the <em>only</em> thing you teach because now I'm concerned that you'll be bored—or worse, think it beneath you—when teaching introductory level courses exclusively. And while this search committee wasn't in my area of expertise, I'm quite familiar with theory. If I can't follow your course's theme or what you say your research interests are, I worry our students won't either.</p>
<h4 id="the-ugly">The Ugly</h4>
<p><q>Your esteemed institution</q><br />
Look, no offense to <abbr title="My Place of Work">MPOW</abbr> which I truly love, but no one esteems us. We're not Harvard and, more importantly, we're not trying to be Harvard. Institutional prestige is meaningless to us. We're in the business of teaching students, of moving them along to prestigious institutions. We don't need the credit. And the fact that you failed to name our institution indicates you sent this same cover to a dozen other schools.</p>
<p><q>I taught X at Y, Z at A, B at C, D at E, F at G, H at I...</q><br />
Yes, people actually wrote this in their cover. The laundry list approach shows you're clueless and perhaps haven't read your own cover letter aloud. Lengthy lists are unpersuasive, particularly when you relate zero details about each appointment (oh boy, do we get to be another bullet point on your list?!?). Furthermore, it wastes an incredible amount of space in the cover when I'm searching for persuasive narrative. These details are entirely redundant with your <abbr title="Curriculum Vitae">CV</abbr>; let the <abbr title="Curriculum Vitae">CV</abbr> do that, the cover is the time to tell me about <em>who you are</em> and <em>why we want you</em>.</p>
<p><em>Tiny font, no line-spacing, letters over a page & a half long</em><br />
All of these show me that you don't respect the hiring process. We're reading a lot of letters. I would love to devote an infinite amount of time to carefully considering each applicant but the truth is the reason why there is a conventional cover letter length is that time is finite. No one gets extra time or space. Have a lot to say? Include an optional attachment that I reserve the right to ignore or only consult if you make it to the next round of consideration. But making your cover 8pt font with no line spacing is a sophomoric trick which fools no one. An unfortunate majority of faculty don't trust students and thus require a particular font and spacing; do you really think they won't spot the opposite trick coming from you?</p>
<h4 id="conclusion">Conclusion</h4>
<p>After reading dozens of job applications, many of the points above became obvious. But they weren't on my mind when I was submitting my own job applications. I probably included laundry lists, I said I was available ASAP, and I didn't include a teaching philosophy (which, as it happens, was never required for any of the positions I applied to). The point is: none of this is obvious until you've seen the process from the other side. I hope someone reads it and finds it helpful.</p>Anonymoushttp://www.blogger.com/profile/13737038965630253900noreply@blogger.com0tag:blogger.com,1999:blog-5067904571139905755.post-56298729632482787272013-03-21T10:30:00.000-04:002013-03-21T10:31:48.563-04:00Libraries, Art, Math, & the Value of Failure<p>I am going to start in one place & end up somewhere else. Ready? Here we go.</p>
<h4 id="failure">Failure</h4>
<p>There's a wonderful trend lately at library conferences of promoting open dialog around failure. I was first acquainted with this at the <a href="https://twitter.com/search?q=%23drupalfail">#drupalfail</a> sessions held by <a href="http://www.ala.org/lita/about/igs/drupal/lit-igdrupal">LITA's Drupal Interest Group</a>. Presenters would detail the various ways their projects crashed & burned, or merely did not meet expectations. With Drupal, this is particularly easy: it's a complex <abbr title="Content Management System">CMS</abbr>, as powerful as it is enigmatic, & you have to be fairly experienced to successfully plan & implement a project with no hiccups along the way.</p>
<p><strong>Related news flash</strong>: most librarians don't learn Drupal in library school, it's something they learn on the job, so there are a lot of intermediate failures before anyone gets close to something that vaguely resembles success. But Drupal isn't the only example of this trend: I <a href="https://cynng.wordpress.com/2013/02/11/code4lib-pre-conference-fail4lib/">heard</a> <a href="http://eduiconf.org/2013/03/18/four-big-ideas-from-code4lib-2013/">good things</a> about Code4Lib's <a href="http://wiki.code4lib.org/index.php/2013_preconference_proposals#Fail4lib">"Fail4Lib" preconference</a>. There have been scattered talks elsewhere discussing the need to create a culture where taking risks & occasionally failing is welcome. It's certainly a necessary element of innovation.</p>
<p>What stands out about these failure sessions? They're useful. Knowing someone else's mistakes saves you immense amounts of time & often all you have to do is <em>avoid something stupid</em> to gain from it. As a technologist at a small library, I'm constantly bombarded with awesome things I can't use: they require money, or staff, or time, or expertise, or scale that we just don't have. It's cool to hear about Linked Data & Near-Field Communication; they're just not wise investments on my part. But when I hear someone say that creating a custom theme from scratch in Drupal is a waste of time relative to using a pre-built theme, I'm instantly more prepared to do my job. Don't reinvent the wheel with theming, check. Lesson learned, time saved.</p>
<h4 id="art">Art</h4>
<p>When you consider the pedagogical value of failure, some weird issues arise. I had a unique undergraduate career in that I was trained in both the humanities (English) & formal sciences (Mathematics). You know what both of those fields happen to be utterly terrible at? Teaching failure. In math, when a theorem is superseded, it's simply not taught anymore. It might as well have never existed. I never had homework problems phrased "spot the problem with this theorem" or "hey Fermat was a dummy, can you tell why?" Mathematics ignores an entire mode of analysis. You become skilled at deductive reasoning & constructing your own theorem cabins from axiom Lincoln Logs; you never learn how to approach someone else's theoretical edifice other than simply assuming it's true because it's in the textbook.</p>
<p>English is also awful at admitting failure, in its own warped way. We read the classics, but not the failed classics. There are at least two kinds of failed classics: works which were highly regarded in their own time but grew irrelevant & works which were never highly regarded in any time. Either way, why are the canonical works more valued than the telling failures of their contemporaries? While Mathematics education's failure to teach non-deductive modes of logic is troubling, artistic prejudices are even more disturbing.</p>
<p>Everyone has, by now, heard "beauty is in the eye of the beholder." The aesthetic disciplines cling to this maxim as if it somehow places them outside the realm of objective inquiry, paradoxically able to pass judgment without recourse to supporting evidence. If this is true, why do we read Shakespeare? <a href="#fn1" id="note1">[1]</a> Wouldn't any arbitrarily chosen text suffice, given that the text itself is irrelevant, it's the Beholder that matters? Of course, what you learn in English is that—to paraphrase George Orwell's <em>Animal Farm</em>—some Beholders are more equal than others. Your professors are Beholders, you as a student are but a Beholder-in-training, & art works are unimpeachable: they do not fail, they either go unmentioned or become canonical via mysterious means. Aesthetics masquerades as subjective judgment while never admitting its own folly or interrogating the social conditions that cause certain works to become canonical while others are summarily discarded.</p>
<h4 id="practicality">Practicality</h4>
<p>Doubtless there are objections that I'm conflating disparate fields. Mathematics is axiomatic logic, English is aesthetics, & the specific vein of librarianship I've mentioned is quite practical. These are library <em>projects</em> that failed & perhaps we cannot say a work of art or a theorem fails in any corresponding sense.</p>
<p>But don't give up on me so easily. Librarians are onto something here. We know that art fails. We don't purchase every book, we write harsh <a href="http://goodreads.com">Goodreads</a> reviews about books that didn't please our Beholder's eye. My earlier examples were from library technology events, events that skirt around the practice of programming if not engage it directly. & what is a failed program, or a bug in an algorithm, if not a flawed theorem? There are connections & they might even be meaningful. Otherwise I'm just way off base, a deranged squirrel collecting copper washers for the winter. I never was good at aesthetics as theory. I probably should avoid writing about it. But I'm pretty good at failing & perhaps I'll write more about that.</p>
<h5 id="footnotes">Footnotes</h5>
<p><span id="fn1">[1]<a href="#note1">^</a></span> The thing about Shakespeare: he's not a very good writer. He has flaws & they're the sort creative writing teachers spell out in red sharpie at the end of student plays: "Heavy handed." "Deus ex machina." "Did you really back yourself into such a corner that the only way out is to kill every single character for which the audience has a shred of empathy left? Please, go back to the drawing board." I'm still bitter about Shakespeare, a sole author, being a <em>required</em> course for my undergrad degree, which utterly ignored the entire 20th century.</p>Anonymoushttp://www.blogger.com/profile/13737038965630253900noreply@blogger.com0tag:blogger.com,1999:blog-5067904571139905755.post-59336705437119519162013-02-22T09:29:00.002-05:002013-02-22T09:29:41.496-05:00Reflections on Writing JavaScript<p>I've been working with JavaScript for a little while now & I want to briefly share changes I've made in my coding style. These changes, while seemingly pedantic, can be very meaningful in constructing a maintainable script.</p>
<h3 id="use-anonymous-functions-sparingly">Use Anonymous Functions Sparingly</h3>
<p>When I first started writing semi-serious JavaScript using jQuery, I was passing anonymous functions as parameters frequently. It's a pattern that's condoned by <a href="http://www.codecademy.com">Codecademy</a> & all the brief jQuery <abbr title="Application Programming Interface">API</abbr> examples, but it gets messy & unsustainable quickly. Throwing anonymous functions around all the time misses the entire point of <em>functions</em>, i.e. that they're named, reusable chunks of code. What's clearer here:</p>
<pre><code>// anonymous
$.getJSON( "http://some.api.url/gimme.json", { q: "search+term" }, function ( response ) {
    var len = response.len;
    if ( len > 0 ) {
        console.log( "Well, at least it's not empty..." );
    } else {
        return "ERROR ERROR DEATH FATAL ERROR";
    }
    var dataset = [];
    for ( var i = 0; i < len; i++ ) {
        dataset.push( response.items[ i ].text );
    }
    return dataset;
} );

// named
$.getJSON( "http://some.api.url/gimme.json", { q: "search+term" }, processResponse );
</code></pre>
<p>Having ten lines of anonymous function pasted into a function call as a parameter is probably the least readable code pattern commonly in use. In particular, if other parameters also span multiple lines (e.g. if I pass a much larger object in the second parameter above) it is a chore to differentiate between commas that separate items within objects & arrays & the commas that separate the parameters you're passing. Debugging is also easier with named functions; you can look back through a call stack that makes sense, rather than discovering that the last function called before an error was but one of the twelve anonymous ones sprinkled throughout your code.</p>
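<p>For completeness, here's a sketch of what the named callback might look like; <code>processResponse</code> & the shape of the response object are my own assumptions, mirroring the anonymous version above:</p>

```javascript
// hypothetical named callback, mirroring the anonymous function above;
// the response shape ( len, items[].text ) is assumed for illustration
function processResponse( response ) {
    var len = response.len;
    if ( len === 0 ) {
        return "ERROR ERROR DEATH FATAL ERROR";
    }
    var dataset = [];
    for ( var i = 0; i < len; i++ ) {
        dataset.push( response.items[ i ].text );
    }
    return dataset;
}
```

<p>Now the function has a name that shows up in stack traces & it can be reused by any other request in the script.</p>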
<p>The one disadvantage is that it's not immediately evident that processResponse is a function; it looks like it could be any type of variable. That's why the most readable way to call functions like this is to pass parameters in an object, a pattern jQuery makes extensive use of:</p>
<pre><code>// passed in an object
$.ajax( {
    url: "http://some.api.url/gimme?json=yesyesyes",
    dataType: "json",
    data: { q: "search+term" },
    success: processResponse,
    error: displayError
} );
</code></pre>
<p>This makes the role of processResponse much clearer; it's a callback function called upon a successful request. If the <code>$.getJSON</code> function let me pass in both a success & an error callback, I'd have to look up the function's syntax every time just to figure out which anonymous function was assigned to each. With the object parameter, their roles are doubly evident both from the name of their key as well as the name I've given the function.</p>
<h3 id="and">&& and ||</h3>
<p>&& and || are frequently used in assignment expressions, while intuitively they seem to belong only inside conditional expressions. It's not something I do a lot but it's incredibly frequent in code libraries so understanding its usage is important. Basically, && and || are not merely logical operators that produce true or false; they are expressions which return one of their operands. && returns the first value if it is falsey & the second if the first is truthy; || is the opposite in that it returns the second value if the first is falsey & the first if it is truthy. You can see how this works in typical conditionals, where && is used to mean "and" & || is used to mean "or". Example:</p>
<pre><code>if ( false && true ) // -> false because 1st is falsey, code won't execute
if ( false || true ) // -> true because 2nd is truthy, code will execute
</code></pre>
<p>We know intuitively that these make sense, because "and" usage means that both the first <em>and</em> the second conditions must be true while "or" usage is happy if either the first <em>or</em> the second is true. But what do you think this code, adapted from the Google Analytics snippet (the real snippet initializes an empty array rather than an object, but the idea is identical), does?</p>
<pre><code>var _gaq = _gaq || {};
</code></pre>
<p>Does it make sense to have a || outside of a conditional statement such as if? Here, || returns _gaq if _gaq is truthy (e.g. if it exists) but it will return an empty object literal if _gaq is falsey. Then, later on in my code, if I add a method or property to _gaq I've guaranteed that it exists so I won't receive a reference error. So a more verbose but less tricksy rewriting is:</p>
<pre><code>if ( _gaq !== undefined ) {
_gaq = _gaq;
} else {
_gaq = {};
}
</code></pre>
<p>Writing one line as opposed to five makes sense; an if-else condition is overkill here, when we just want to check if our object already exists & initialize it as empty if not.</p>
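<p>A couple of runnable one-liners make the value-returning behavior concrete (the variable names here are mine, purely illustrative):</p>

```javascript
// || returns the first truthy operand, or the last operand if none are truthy
var name = "" || "anonymous";   // "" is falsey, so name is "anonymous"
var count = 0 || 42;            // 0 is falsey, so count is 42

// && returns the first falsey operand, or the last operand if all are truthy
var both = "hi" && "there";     // "hi" is truthy, so both is "there"
var none = null && "unreached"; // null is falsey, so none is null & the right side never evaluates
```

<p>The short-circuit on the last line is also why patterns like <code>obj && obj.method()</code> are a safe guard against calling a method on something that doesn't exist.</p>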
<h3 id="spaces">Spaces</h3>
<p>Spaces are good. I like an abundance of spaces in my code. I pad array brackets, object curly braces, & parentheses wrapped around control flow expressions or function parameters with spaces. So I write</p>
<pre><code>var obj = {
    nums: [ "one", 2, "three" ],
    funk: function ( param ) {
        if ( param.toLowerCase() === 'parliament' ) {
            return 'Give up the funk.';
        }
    }
};
</code></pre>
<p>instead of</p>
<pre><code>var obj = {nums: ["one", 2, "three"],
funk:function(param){
if (param.toLowerCase() === 'parliament') return 'Give up the funk.';
}};
</code></pre>
<p>One telling space is the parentheses that wrap a function's parameters. I try to always put a space in between the term <code>function</code> & the parameters in a function definition, while there's no space when the function is being executed.</p>
<pre><code>var funk = function ( args ) { ... } // function assigned to variable
funkyFunk function ( args ) { ... } // function declaration
funk(); // function being executed.
</code></pre>
<p>Functions are thrown around so frequently in JavaScript that this subtle difference, if consistently enforced, can go a long way towards helping you read whether a piece of code is being executed or defined for later use.</p>
<h3 id="switch">Switch</h3>
<p>I generally avoid the <code>switch</code> statement; its syntax is weird. I find it uncharacteristic that the code blocks following "case foo" aren't wrapped in curly braces. If I had to guess how a switch statement would be done, the cases would look more like:</p>
<pre><code>switch ( foo ) {
    case ( bar ) {
        doSomething();
        break;
    }
    case ( bah ) {
        doSomethingElse();
        break;
    }
}
</code></pre>
<p>which parallels the control flow operators. <code>switch</code> doesn't save much space over a series of if comparisons & carries the potential hazard of <a href="http://www.yuiblog.com/blog/2007/04/25/id-rather-switch-than-fight/">unintentional fallthrough</a>.</p>
<h3 id="and-2">++ and ?</h3>
<p>I follow a lot of <a href="http://www.worldcat.org/title/javascript-the-good-parts/oclc/767497960">Douglas Crockford's advice</a>, but not his avoidance of <code>++</code>. I use <code>++</code> in <code>for</code> or <code>while</code> loops & it hasn't come back to bite me. Sometimes I'll use it to increment a value outside of a loop. I think I understand its usage in these limited contexts & while it isn't a huge gain in terms of saving space, it's nice to put all my loop details in one expression. I also don't think the ternary operator is worth avoiding; it's very handy during variable initialization even if it's a little opaque, much like || and &&. The ternary operator looks like:</p>
<pre><code>var someVariable = ( expression ) ? "value if expression evaluates to true" : "value if expression evaluates to false";
</code></pre>
<p>We could rewrite the Google Analytics code:</p>
<pre><code>var _gaq = ( _gaq ) ? _gaq : {};
</code></pre>
<p>It does the exact same thing: check to see if <code>_gaq</code> is truthy, initializing it to an empty object literal if not.</p>
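<p>Another spot where the ternary earns its keep is supplying default values inside a function; a tiny illustrative example (<code>greet</code> is mine, not from any library):</p>

```javascript
function greet( name ) {
    // ternary during variable initialization: fall back when no argument is passed
    var who = ( name ) ? name : "world";
    return "Hello, " + who + "!";
}

greet();             // "Hello, world!"
greet( "JavaScript" ); // "Hello, JavaScript!"
```

<p>An if/else would take four lines to say the same thing.</p>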
<h3 id="you-dont-hate-javascript-you-hate-the-abbr-titledocument-object-modeldomabbr">You Don't Hate JavaScript, You Hate the <abbr title="Document Object Model">DOM</abbr></h3>
<p>I, as many JavaScript programmers before me, have discovered that JavaScript is really not so bad a language. It has its peculiar errors—the extreme <a href="http://javascript.crockford.com/remedial.html">unreliability of <code>typeof</code></a> & the leading zero <a href="http://stackoverflow.com/questions/8763396/javascript-parseint-with-leading-zeros">issue with <code>parseInt</code></a> come to mind—but it also has gorgeous features. In particular, the <a href="https://en.wikipedia.org/wiki/First-class_function">first-class</a> nature of functions is wonderful & I can't live without it. Passing functions <em>as parameters</em> to other functions is mind-blowing once you realize how much you can achieve with it.</p>
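<p>A tiny example of what first-class functions buy you; the names here are mine, purely illustrative:</p>

```javascript
// a higher-order function: it takes another function as an ordinary parameter
function applyTwice( fn, value ) {
    return fn( fn( value ) );
}

function exclaim( s ) {
    return s + "!";
}

// exclaim is passed around like any other value, no special syntax required
var excited = applyTwice( exclaim, "wow" ); // "wow!!"
```

<p>This is exactly the mechanism behind every jQuery callback: you hand a function to another function & let it decide when to call it.</p>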
<p>But JavaScript's biggest issue isn't the language itself, it's the way it interacts with <abbr title="HyperText Markup Language">HTML</abbr> pages via the <abbr title="Document Object Model">DOM</abbr>. <abbr title="Document Object Model">DOM</abbr> manipulation is tough, the commands are verbose, & cross-browser incompatibilities abound. There's a reason why people love <a href="http://jquery.com">jQuery</a>; it removes the pain of accessing & altering the <abbr title="Document Object Model">DOM</abbr>, scaffolding on top of <abbr title="Cascading Style Sheets">CSS</abbr> selectors that most web developers already know. The biggest piece of advice I give to people who want to learn JavaScript is to start with jQuery. With a nice layer of abstraction, you can actually <em>do</em> something on a website, which is amazingly gratifying. The building blocks of the language are easier to acquire when you see their utility on the web, as opposed to repeatedly printing text to the console.</p>
<h3 id="conclusion-steps-to-learning-a-language">Conclusion: Steps to Learning a Language</h3>
<p>There are a few steps you go through when learning a programming language. The very first step is simply understanding what syntax is valid. Writing <code>echo "Hello world!"</code> will result in an error in JavaScript. The next step is understanding the advantages of specific syntax choices: knowing whether a particular situation calls for a particular control flow construct, for instance.</p>
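<p>A trivial sketch of both steps: the valid equivalent of that <code>echo</code> line, plus one case where two control flow constructs do the same job:</p>

```javascript
// The JavaScript way to print a string; the PHP-style
// echo "Hello world!" is a SyntaxError here.
console.log("Hello world!");

// A syntax choice: a ternary reads well for a simple
// conditional assignment...
var count = 1;
var label = count === 1 ? "item" : "items";

// ...while an if/else says the same thing more verbosely.
if (count === 1) {
  label = "item";
} else {
  label = "items";
}
```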
<p>The next step after that is meaningless in terms of how the code executes but of paramount importance to programmers, who tend to be human: knowing how to write clear code. Once I had the basics out of the way, I found myself having lots of opinions on what makes a piece of JavaScript understandable. Now, every time I go back & look at something I wrote previously, I find myself employing all sorts of conventions (spaces! fewer anonymous functions!) that I've discovered or come to appreciate. While much of <em>JavaScript: The Good Parts</em> went over my head initially, I now understand its essence: that deliberate choices when writing JavaScript can not only avoid common programming pitfalls but also increase clarity.</p>
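<p>One concrete example of the kind of convention I mean (a sketch, not code from any real project): naming a function instead of inlining it anonymously documents intent & shows up in stack traces:</p>

```javascript
var prices = [5, 12, 8];

// Harder to scan: the logic is buried in an anonymous function.
var doubledAnon = prices.map(function (p) { return p * 2; });

// Easier to scan & debug: the name documents what the function does.
function double(price) {
  return price * 2;
}
var doubled = prices.map(double);
```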
<h2 id="optimizing-iis-for-performance-security">Optimizing IIS for Performance & Security (February 15, 2013)</h2>
<p>My college uses Microsoft's IIS 7 for its servers instead of the more common Apache. That's fine; IIS is probably a good server. I don't know, I'm not qualified to say which is better. But one thing's for sure: Apache is easier to use & learn simply because of the availability of documentation. If you're a full stack web person starting a new project, please use something with community support & documentation. Apache plays nice with <a href="http://drupal.org">Drupal</a>, there are tons of security & performance tweaks documented online, & it has <a href="http://httpd.apache.org/docs/2.0/mod/mod_cache.html" title="mod_cache">some</a> <a href="https://code.google.com/p/modpagespeed/" title="mod_pagespeed">great</a> <a href="http://www.trickytools.com/php/mod_benchmark.php" title="mod_benchmark">add-ons</a> for any situation.</p>
<p>But hey, I'm stuck with IIS. This post is mostly a note-to-self on how to optimize IIS. I'm not at all a server configuration expert, so please don't take it as gospel. Most especially, if I'm flat-out wrong about something, I'd like to hear about it.</p>
<p>For the tl;dr & the resulting file, see my <a href="https://github.com/phette23/iis7-web-config">web.config</a> github repo.</p>
<h3 id="caching">Caching</h3>
<p>The hardest part is caching correctly. The goal is to use far-future expires headers, similar to <code>Cache-Control: max-age=9000000</code>. There are many different means of caching in HTTP but far-future expires is both simple (the server just says "hey, you can hang onto this content for X seconds") & effective. Some other caching methods end up sending "conditional get" requests, essentially saying "hey, server, I have version 3.2 of this file, is that current?" & the server sends a response back saying either "yup, carry on" or "nope, here's the current version." That is slightly less error-prone, because you can update a file on the server & it'll still make its way to clients that have cached the content, but that extra HTTP request adds up quickly. To update files with max-age or other far-future type caching schemes, I use filename-based versioning, essentially bumping a version number like "style.1.css" to "style.2.css" every time I change a file. Because remembering to change filenames is tedious, I either have a CMS (Drupal's built-in caching) or a build script (<a href="http://yeoman.io">Yeoman</a>) handle it for me.</p>
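<p>The rename itself is trivial to script; here's a toy sketch of such a helper (my own, not lifted from Drupal or Yeoman):</p>

```javascript
// Bump the version segment in names like "style.1.css" -> "style.2.css".
// Assumes the name.<number>.ext convention described above; names
// without a version segment pass through untouched.
function bumpVersion(filename) {
  return filename.replace(/\.(\d+)\.([^.]+)$/, function (match, version, ext) {
    return '.' + (parseInt(version, 10) + 1) + '.' + ext;
  });
}

console.log(bumpVersion('style.1.css')); // "style.2.css"
console.log(bumpVersion('app.9.js'));    // "app.10.js"
```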
<p>In IIS 7, unfortunately, it looks like you can either set static content caching on or off with little in between (Apache lets you specify expires times by <a href="https://en.wikipedia.org/wiki/Internet_media_type">MIME type</a>). If there's a particular static MIME type that you don't want cached, too bad. That's problematic for at least two types: text/html & text/cache-manifest. These are both static text types, but the files need to be able to change <em>without changing their name</em>. If you altered your HTML file's name every time it changed, you'd constantly break incoming links. The <a href="http://www.html5rocks.com/en/tutorials/appcache/beginner/">appcache</a> manifest's name can't change because renaming it causes this weird loop wherein clients that have previously visited the site & primed their cache can never get an updated version, because they always look in the wrong place; Jake Archibald covers this brilliantly in <a href="https://speakerdeck.com/jaffathecake/application-cache-douchebag?slide=35" title="Don't far-future cache the manifest!">Appcache Douchebag</a>.</p>
<p>So to get around this conundrum, I use two layers of web.config files: in the site's root, where HTML, server-side scripts, & the cache manifest reside, I use a config with no caching whatsoever, that's <code>&lt;clientCache cacheControlMode="DisableCache" /&gt;</code>. Then, in any subdirectory where static content (images, CSS, JavaScript, fonts, etc.) might reside, I override that setting with an aggressive, far-future expires header.</p>
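<p>Concretely, the two layers look something like this (the directory name & max-age value are illustrative, not lifted from my actual config):</p>

```xml
<!-- web.config at the site root: HTML, server-side scripts, & the
     cache manifest live here, so nothing gets a far-future header. -->
<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="DisableCache" />
    </staticContent>
  </system.webServer>
</configuration>

<!-- web.config in a static subdirectory (images, CSS, JS, fonts):
     cache for a year, in IIS's d.hh:mm:ss format. -->
<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```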
<p>Finally, I remove ETags with a two-part rule. The HTML5 Boilerplate server configs botch this horribly, ruining the X-UA-Compatible header in the process, but some searching around StackOverflow found me the right combination of rules to remove ETags per performance best practice (see <a href="http://www.worldcat.org/title/high-performance-web-sites-essential-knowledge-for-frontend-engineers/oclc/144596256&referer=brief_results">Steve Souders' book</a>).</p>
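<p>The common StackOverflow recipe is an outbound rule in the URL Rewrite module, along these lines (a sketch of the idea, not necessarily the exact rules I use):</p>

```xml
<!-- Goes inside <system.webServer>; requires the URL Rewrite module. -->
<rewrite>
  <outboundRules>
    <!-- Blank the ETag on every response that has one. -->
    <rule name="Remove ETag">
      <match serverVariable="RESPONSE_ETag" pattern=".+" />
      <action type="Rewrite" value="" />
    </rule>
  </outboundRules>
</rewrite>
```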
<h3 id="gzip">GZIP</h3>
<p>I just copied this bit from the <a href="https://github.com/h5bp/server-configs">HTML5 Boilerplate Server Configs</a> & made sure it worked with YSlow & other external tests. It's super important to GZIP content, arguably the biggest performance win you can get, & yet that's not the default in IIS 7.</p>
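<p>The relevant Boilerplate-derived settings look roughly like this (note that <code>httpCompression</code> may be locked at the applicationHost level on some hosts, in which case it has to be changed there instead):</p>

```xml
<!-- Goes inside <system.webServer>. -->
<!-- Turn compression on for both static files & dynamic responses. -->
<urlCompression doStaticCompression="true" doDynamicCompression="true" />
<httpCompression>
  <staticTypes>
    <!-- Compress text-based types; skip everything else by default. -->
    <add mimeType="text/*" enabled="true" />
    <add mimeType="application/javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </staticTypes>
</httpCompression>
```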
<h3 id="security">Security</h3>
<p>I'm not an expert at hardening servers, but it makes sense to eliminate headers that unnecessarily expose server information without any added benefit. I blank the <code>X-AspNet-Version</code>, <code>X-Powered-By</code>, & <code>Server</code> headers. Another IIS quirk is that you can't simply remove the <code>Server</code> header; all you can do is set its value to an empty string, which is at least enough to protect against the version number being exposed.</p>
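<p>A sketch of those three tweaks in web.config terms (these are the standard IIS/ASP.NET elements; the Server rule again assumes the URL Rewrite module is installed):</p>

```xml
<configuration>
  <system.web>
    <!-- Stops the X-AspNet-Version header at the source. -->
    <httpRuntime enableVersionHeader="false" />
  </system.web>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Drops the X-Powered-By advert entirely. -->
        <remove name="X-Powered-By" />
      </customHeaders>
    </httpProtocol>
    <rewrite>
      <outboundRules>
        <!-- The Server header can't be removed outright in IIS 7,
             but it can be rewritten to an empty string. -->
        <rule name="Blank Server header">
          <match serverVariable="RESPONSE_SERVER" pattern=".+" />
          <action type="Rewrite" value="" />
        </rule>
      </outboundRules>
    </rewrite>
  </system.webServer>
</configuration>
```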
<h3 id="rendering-engines">Rendering Engines</h3>
<p>Since the <code>X-UA-Compatible</code> meta tag doesn't really work, I send it as an HTTP header. This forces IE to use <a href="http://www.google.com/chromeframe?prefersystemlevel=true">Chrome Frame</a> if it's available or the latest rendering engine (e.g. no IE 8 using the IE 7 engine) if not.</p>
<h2 id="eric-explains-urls-video">Eric Explains URLs (video, February 9, 2013)</h2>
I'm teaching a course entitled "The Nature of Knowledge" and we're specifically focusing on what happens to knowledge in a digitized, networked environment. I gave the class a "technology inventory" survey to complete and the hardest question on it proved to be identifying the top-level domain of a given URL. As such, I made this video to explain URLs a little bit more in-depth.<br />
<br />
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="360" src="http://www.youtube-nocookie.com/embed/s9Ae1ZRqNLo?rel=0" width="640"></iframe></div>
<br />
<br />
<h3>
Weaknesses</h3>
I didn't do a particularly good job of explaining a few things in this video. I want to make it clear that it's not a flawless intro. Hopefully I can remake it sometime, but for now here are some caveats:<br />
<br />
<ul>
<li><b>What does a scheme mean?</b> I introduce two of them but don't describe their implications, i.e. that they're transfer protocols.</li>
<li><b>Subdomains</b> are the pieces of the domain to the left of the registered name (the "www" in www.example.com, for instance). I don't think that's clear from my example.</li>
<li><b>Search</b> can literally be a file, e.g. search.php, search.html, search.pdf (though that wouldn't have a query string). I know that the idea of URLs pointing to files is mostly an antiquated idea in the days of database-driven CMSs & web frameworks like Ruby on Rails. But it's a good starting point to learn more about them.</li>
<li><b>Google is a bad example</b>. I knew it wasn't a perfect one, but I didn't realize quite how poor: Google doesn't use a ? to distinguish the query string, oddly enough, so a Google search actually contradicts how I describe query strings.</li>
</ul>
<div>
Anything I missed? Open to criticism but I hope this is a decent overview despite its flaws.</div>
<div>
Also, I have a git repo of the site I made to demonstrate the different pieces, totally willing to share if someone wants it.</div>