Embracing Failure

The silence on this blog is glaring. Crafting yet another introduction to yet another apology for silence fills me with a familiar feeling of fraudulence, a nagging insecurity about my repeated inability to embrace the constant grind that being a writer requires.

Yet I can embrace one thing: the knowledge that the root of all of this silence is my persistent willingness to value success over failure. That is, my silence stems not from an inability to speak, but from an inability to accept the failure that inevitably comes from each and every attempt I will ever make to express myself wholly. Good ideas do not emerge fully formed; they emerge from iteration upon iteration, and it takes a lot of failure to get to a successful idea worth sharing. And while I could have been iterating on failures, I was instead finding short-term successes in my career. Much of my daily job requires producing successes, which means that much of my work happens in the form of ephemeral face-to-face interactions or in short, satisfying e-mails. I am not shaming myself for immersing myself fully in this work (it has been the right thing to do at this point in my life), but I also want to embrace more fully the work required to write.

That is, today, I want to embrace the work of failure. I’m starting this by writing what’s going to be something of a failure at a blog post.

Why am I already saying that this blog post is a failure? I’m saying it’s a failure because it does not adhere to any of the conventions of a blog post. Its title is not that catchy; it won’t rank well in any search engine results. The only way anyone will ever read or find this is if I share it through social media. Otherwise, it’ll just be part of the block of Internet content noise. Fine. So be it. I’m embracing it.

It’s also a failure because it’s not going to compel you to do anything. It won’t offer you an action item; it’ll simply offer you some understanding of how I’m feeling about my writing now and my identity as a “writer.” Maybe that’ll be worthwhile to you. But this post is still something of a blogging failure precisely because it doesn’t explicitly offer you much of anything. I’m aware of your needs as an audience, but I’m not really addressing them right now. I’m going to apologize – yes, sorry – but I know that apology probably seems a little insincere because I’m not really going to do anything to remedy this failure of a blog post. It’s going to live as a little failure.

I’m deliberately producing a failure to help myself embrace it. What I’ve learned from my various identity pivots over the years – from creative writer to journalist to burgeoning professor to “higher education professional” – is that boxing myself into successes and expecting myself to produce my best possible work at all times is not sustainable. My pivots have been possible because I’ve been willing to “fail” and have, instead, turned my failures or falterings into futures. I’ve not always been entirely comfortable with that, but remaining flexible, trying new things, and discovering new strengths has, much to my surprise, made me pretty happy.

I’m going to fail a little more on this blog soon and make this my space to keep experimenting with ideas – a space where I can see what’s working and what isn’t. This is the only way I’ll continue to learn and grow.


Dabbling in Text Visualization, Part 1

It’s no news that decision-making in academia is slow. Journals, conferences, edited collections, new haircuts – all of these things seem to take a while to happen in academic settings. So far, I’ve had the most experience with waiting for conference acceptances (oh, and haircuts); I was shocked the first time I had to submit a conference proposal nearly a year before the conference would actually happen.

The problem (problem?) is that I’m a bit of an opportunist when it comes to applying for things. So, I applied for a major conference in the rhetoric/composition community last year (read: CCCC) and got accepted! Hooray happy day!

But when that acceptance came in, it felt like – you know – it wouldn’t happen for a very long time. So, of course, that feeling that this very important thing is actually very far away was simply the beginning of a typical procrastination narrative: “Surely, I’ll have a much better idea of what exactly to do for this presentation if I wait, right?”


I mean, really, what was I thinking? Image courtesy of: HaHaStop.com.


Now, to be fair, I had done a little bit of work on this project for the UC Writing Conference, and Katie Arosteguy, a member of the panel I was on, put together a pretty sweet-looking Wix site for us to put up our contributions (i.e. I posted a PowerPoint with my presentation on it).

So, I had something to get me started, but the PowerPoint struck me as a bit anemic, even as I was presenting it.

A little bit of context: the presentation is trying to answer the question of whether students see the value in acquiring digital literacy skills, and whether these skills seem useful for them (from their perspective). I’m defining digital literacy skills as the ability to create a website (e.g. a WordPress page or a blog, not anything requiring coding knowledge), to read texts closely in virtual spaces (e.g. online, in PDF readers), and to navigate web-based research through library databases. I realize others have more nuanced definitions of what digital literacy means, but I developed mine based on the NCTE’s definition. Their definition is (rightfully, purposefully) broad, and I know that the skills I associate with “digital literacy” now will likely change over time.

OK, that said: after doing some interviews, organizing a focus group, and close reading the digital literacy narratives I asked them to write (more on that in a moment…), I’m finding that a lot of students do not see the same importance in learning digital literacy that – well – many of their instructors do. In fact, the digital literacy narratives (yes, more on this in a moment, really) seem to reveal that a lot of students have (or are at least performing, for the sake of the assignment) a certain kind of shame about their use of digital devices to read, write, and communicate, calling their use of computers “addictive” and “unproductive.” Sure, going on Facebook 24/7 is probably not the most productive use of time, but the kind of work they do on Facebook is often rhetorical, and (seriously) many of them will probably need to navigate more social networks in the future to find jobs and network with people. 21st century stuff.

Now, I don’t want to assert that it’s a problem that students think/feel this way; I want to make some bigger claims about why they might be feeling this way. I’m not going to talk about those “why” claims here (perhaps they’ll appear in a post to come and/or I’ll post my presentation materials from CCCC here), but what I do want to write about here (and what has taken me a really long time to get to; sorry!) is how I want to represent these ideas.

Students from five different sections of freshman writing have to write a digital literacy narrative, and I wanted to see if the repeated tropes in the narratives I read in my section were similar to the ones in other sections. I really wanted to see whether there were any trends in the things those students were writing about.

So, I did something I had never done before: I entered the big bad world of data. I took an afternoon to mine a bunch of past UWP portfolios and put together a huge corpus of digital literacy narratives. How did I do that?

Why, through Voyant Tools!


Now, this tool is awesome. After entering the URLs of a bunch of student portfolios, I was able to create an insta-corpus where I could look at lists of the most frequently repeated words and create visualizations of the data, like Word Clouds and Collocations. Once you enter all of your data, your page looks something like this:

I was looking at the patterns of a frequently used word, “obsession,” at the moment when I took this screenshot.

The most important thing I learned to do while creating a textual visualization of my data was to use a “stop list.” This is a list of words that the tool will ignore in its analysis, so that it isn’t just spitting out findings like, “Hey, look, the most frequently used word in these narratives is ‘I’! Isn’t that neat?”
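The concept is simple enough to sketch in a few lines of JavaScript – this is just an illustration of how a stop list works, not Voyant’s actual code, and the stop list and sample text here are made up:

```javascript
// A tiny word-frequency counter with a stop list, illustrating what a
// tool like Voyant does behind the scenes.
var stopList = ["i", "the", "a", "my", "and", "to", "of"];

function wordFrequencies(text) {
  var counts = {};
  // Lowercase the text and pull out runs of letters as "words".
  var words = text.toLowerCase().match(/[a-z']+/g) || [];
  words.forEach(function (word) {
    // Skip anything on the stop list; count everything else.
    if (stopList.indexOf(word) === -1) {
      counts[word] = (counts[word] || 0) + 1;
    }
  });
  return counts;
}

var freqs = wordFrequencies("I check my phone and I check my email");
// "i", "my", and "and" are ignored; "check" is counted twice
```

Without the stop list, “i” would dominate the counts; with it, the words that actually tell you something rise to the top.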

Voyant Tools has its own stop list (in English and other languages), but I found myself adapting the stop list a lot, making sure that words appearing in WordPress templates (like the word “WordPress”) were not analyzed. It was fun going through and pruning, making sure I could make as much sense of a large body of texts as possible (hey, is this what they all mean when they’re talking about Digital Humanities work? Side note for another time as well).

I’m really new to any kind of textual and linguistic analysis, so I’m sure there’s still a lot for me to learn, but I was surprised at how easy it was to find this tool and how simple it was to use. Check out how cool this word cloud is!


The collocation is actually even more interesting than this, but again, I think the analysis (and my impressions of how different it is to do analysis based on large bodies of text and visualizations) will have to wait for a Part 2 to this post…

“Just SHOW Me” Or How to Impress Your Boss in Industry

I am an emoticon abuser. Whenever the situation is appropriate for a smiley face, you bet that I’ll go ahead and include one.

This is not something of which I am particularly proud. Granted, the New York Times has justified the existence of our smiling keyboard friends, but this does not make me feel a whole lot better about the ways that I liberally sprinkle them through text messages, GChat instant messages, and even professional e-mails (though to be fair, I only felt free to do so when one of my professors opened the emoticon door first and typed one of his own to me). Emoticons are almost a compulsion for me; when I’m smiling, I want to SHOW the other person that I’m smiling; I want to transmit my smiles through cyberspace, and I’ve somehow lazily relied upon a colon and a parenthesis mark to do the trick for me. Why do I need to bother to express my joy, my enthusiasm, or simply my light-hearted understanding when I can simply excuse it away with a :)?

I want to believe that there’s something still irresistibly powerful about expressing something in language that simply cannot be expressed via an image. I want to believe that I could say whatever I want to say BETTER if I just used my words.

Yet I’ve come to discover that no matter how hard I want to believe that the word is the most powerful means of communicating, it may not necessarily be true in the digital age. This week at my internship, my supervisor asked me to draft up some proposal suggestions. My first suggestion was a long e-mail explaining how I would organize the information. My second suggestion was a series of PowerPoint slides illustrating the ways that I would organize the information. Guess which one he actually looked at and responded to?

In this situation, I can understand why the PowerPoint was more rhetorically effective; I’m re-organizing help information for an interface that is entirely contingent upon visual logic. Why should I write out my ideas in words when the information will eventually be presented entirely visually? The issue at the heart of the help documentation I am supposed to re-organize is, in fact, its wordiness; apparently, customers are not reading the help information because they simply want to be shown how to use the software that the company has sold them. So, I probably should have assessed this rhetorical situation a little bit better at the start and just made the PowerPoints to begin with rather than spending the time drafting paragraph after paragraph of well-written ideas.

My boss actually spoke to this in a meeting we had, too. We met in his office to discuss both the e-mail and the slides that I had sent him. Initially, I attempted to talk through my justification for fashioning the slides the way I did, explaining the visual choices I had made and why I had organized the information in a particular way. At a moment of pause, my boss pointed his finger at his laptop and said:

“Look, with words, we’re always going to misinterpret each other. When I explain things to you, you probably won’t understand me and I won’t always understand you. So, just SHOW me what you have here.”

It was a striking statement. I don’t think he meant it in any aggressive way; he was just being honest. But I remember feeling vaguely hurt by the statement. DOES he always misunderstand me when I speak? I pride myself on clarity; I’d like to think that I am articulate. But perhaps I have too much pride in this respect, for the PowerPoint certainly upstaged everything I said. Once he saw what I had in mind, he finally liked my idea enough to give me the feedback to continue to move forward with the project.

The thing is, I really did not enjoy mocking up those slides. It was a task with which I quickly grew impatient and distracted. Adjusting the heights of different boxes and determining what information went where on the page was a surprisingly taxing activity. But it was one that I had to do; I had to SHOW him what I had conceptualized, and that required an awareness of spacing, formatting, and organization.

To be fair, I’ve never been particularly detail-oriented (case in point: when my sister and I used to cook together, I messily chopped the ingredients into pieces while she designed the beautiful platters. I was never trusted with design. I probably still shouldn’t be trusted with design). Even now, thinking about my goals for the summer and the forever-lingering goal to redesign this very page, I find myself hesitant to do the task; I simply don’t like the tedious work of figuring out the perfect colors for the appropriate boxes or the correct sizes and orientations of different objects on the page. It matters to me, but not so much that I am willing to invest my time in that way. I would rather read about design and think about the implications of design choices than do the dirty work myself.

But you know what?

It is a good thing for me to feel uncomfortable. I have not felt this discomfited by my efforts in a long time. As I’ve mentioned in past entries, I’ve tended to pursue work at which I KNEW I would succeed (or at least I knew I would enjoy, which inevitably involved stories! Ideas! Words, mere words!). So, to see that a powerful industry like information technology makes decisions based primarily on graphics, tables, and icons is powerful; I see that I need to stretch my ways of thinking more, to be OK with feeling uncomfortable.

In the meantime, perhaps I should also forgive myself for the emoticon abuse. After all, if this is, indeed, a world invested in the logic of an image, maybe it’s OK that the simple warmth of a smiley face excuses my own inability to articulate what it is I want to express.

Life in the Cube

For the first time in my life, I have a punch card.

That’s right: my hours inside an office are tracked.

Punch in. Punch out. Present. Absent. Working. Not working.

Shifting from a life of complete flexibility and fluidity to one with rules and set hours is jarring. But this kind of experience – a life where work stays at work and coming home means actually being at home, no longer thinking about work – is something I’ve always kind of longed to experience. It’s funny; there’s a part of me that had this glorified vision of what it would mean to work in an office. I’ve perhaps seen one too many films where nicely-dressed women in crisply-pressed suits flounce into desk chairs, receive incredible praise for writing memos and reports, and then earn oodles of cash at the end of the day. I somehow imagined that I could be this kind of “career woman,” one with professionalism, grace, and intelligence!

Of course, I chose a life of academia, one where I don’t ever wear crisply-pressed suits (and if I did, I’d likely garner more than a few strange looks) and one where my professionalism is not reflected through the ways I interact with my co-workers, but through the intellectual labor that I produce. So, to have this opportunity to live another life, to be another “Jenae” who negotiates office politics, who sits at a cubicle, and who does work that is not concerned with literacy, literature, or abstract theories, is one that’s important for me (if for no other reason than to dispel myself of that office life myth).

As it turns out, working in an office is kind of like working anywhere else, except that you don’t get to see too much sunshine during the day (though I have scouted out a prime lunch spot overlooking a canyon). Oh, and you’re also in front of computers a lot. That’s hard. But my tolerance for screens has improved, so that’s a plus?

In spite of the fact that this internship is very much a way for me to do some career exploration, a week on this job has inevitably informed my academic interests. My mind can’t help but veer to digital literacy concerns!

Help documentation, as it turns out, is still something very much rooted in a logic of the print age: I spent two of my four days on the job simply combing through pre-existing help information in the form of “QuickStart” guides (which are basically step-by-step directions for how to complete certain functions within the software this company sells), “TechNotes” (which tend to give suggestions for “efficient workflow” processes using said software), and more traditional online “Help” supplements (remember Clippy? Like that, but not as invasive).

The company has tried implementing some Online Tutorials, too, which are Flash-powered slideshows with moving screenshots of different functions in the software, but even these cater to a logic that seems somehow incongruous with an experience working on a computer. All of these help guides suggest that there is one very particular way to go about completing certain tasks and using this software.

Now, again, as a newb on the job, perhaps I’m making some unfair assumptions: indeed, it may be true that these kinds of linear, step-by-step manuals are the best way to teach people how to use software. However, given that I’ve been so invested in pedagogy for the past… several years, I cannot help but scoff at the idea that this kind of passive learning could be effective.

Let me be clear: the manuals are incredibly well-written and detailed. They contain so much valuable information for a new user. But is a user who relies upon this kind of help actually going to learn the ins and outs of the software? It seems to me that tinkering, toying, and getting your hands dirty in the process is the only way to truly – well – LEARN.

But how does one really learn tasks that are almost entirely reliant upon memorization and experience? After all, I’m used to helping people learn about writing, a nebulous process enveloped primarily in critical thinking and analytic skills. Using software like the one I’ve been learning does not require critical thinking per se; it just requires a little bit of logic (“So, when you press the ExamType button, you see codes for different exam types. Who knew?”) and some memorization.

I’ve been tasked with making a particular “modality” (i.e. mammography functions) within the software my company represents more “interactive.” I’m still trying to figure out exactly what that means (without suggesting the extreme intervention of a programmer to make me something awesome). Thus far, much of my time has been spent simply trying to use the pre-existing help myself to learn how to use this software. And you know what? I’ve actually found that a balance between the linear help and my non-linear playing has been the most useful for me. What has really helped me learn this software is a combination of reading, playing with the program, and re-purposing the information myself – from taking notes, to categorizing the software functions, to imagining myself in different user roles using the program.

The only role I can’t seem to escape is that of a “digital native”; I’m unafraid to press buttons, to see what certain links do and do not do. I can imagine that many of the people using this software (i.e. radiologists transitioning from print records to electronic) may not feel the same way. This, however, is the audience I have to remember as I consider re-purposing this work.

As I continue to punch in and punch out each day for the following five weeks, I’m hoping I’ll experience increasing clarity about how to best spend that time punched in, and keep myself even more “punched in” to thinking in an entirely new way.

Techno Logic

Scroll it, click it, surf it…

Sounds a little bit like the process I was working through this weekend.

I made the leap and purchased my own domain so that I could install the WordPress software for my webtext project. I can completely understand why “novice” bloggers (like me) are drawn to using the WordPress software: it’s not dramatically different from using the WordPress blogging platform AND you have a lot more flexibility for tinkering. There are hundreds of free themes from which to choose for your blog and – bam! – it looks professional.

Granted, I’m all about learning to code and customize a webpage, but it is liberating to know that there are some ways to ease into the process (a la templates) without looking like a complete newb. I know that I would eventually like to personalize the code on the templates I’m using (after all, how else do I make them uniquely mine?), but for now, I’ve been having some fun just testing out different templates and seeing what works well.

I’m debating between two right now (as my “starter” templates before I tweak) for my final webtext:

Brunelleschi WordPress Theme
Pico Light WordPress Theme

They’re more similar than different. All I knew going into this was that I wanted a kind of “light,” minimalist theme, for this was a quality that all of my interview subjects professed to desire in their own web design. It seemed appropriate to mirror their consciousness of what is attractive, and, indeed, for an academic project, I think it only makes sense to keep the design simple (thereby drawing primary attention to the content and showing the reader that, “Yes, this is serious!”). I’m not sure anyone would take my work seriously if I applied, say, this kind of template to it:

Monster WordPress Theme (Totally Adorable, Not Appropriate for Scholarly Research)

My foray into theme shopping aside, I’ve been tinkering around with the two I’ve narrowed it down to (and I really wish I had taken some screen captures of my attempts to make the font size on Brunelleschi larger; what a disaster! My page looked like a cluttered mess).

At this point, I’m really feeling the Pico Light template. I like the way the header/subpages are all in one block (rather than in separated chunks). It somehow mimics the appearance of a “cover page” more. It also seems to me like the Pico Light template highlights the banner image more, which I think looks rather stark, sophisticated, and serious. All good qualities for an academic webtext!

I could see myself tweaking the font a little bit; the modern sans serif may just be a little too “cold” for my tastes (what can I say? I’m a sucker for a serif), and perhaps I’d try to expand the font sizes on the pages bar so that the page titles don’t look so squashed together on the left-hand side. Of course, my experiment with Brunelleschi has scared me away from doing that a little bit, but part of tweaking code is persistence; every pixel counts.

Any thoughts, blog readers? Which one do you prefer?

A Humbling Weekend

Most digital natives have likely had an experience like I had this weekend: helping mom with the computer.

I’m not sure why she entrusts me with this task. I probably don’t know that much more than she does (though, of course, I have to adopt the bravado to act like I do). But our big task was to determine why her desktop computer was not recognizing a thumb drive.

My solutions to these sorts of problems tend to follow a simple sequence:

1. Mash the thumb drive into the USB port repeatedly until something new happens.

2. Try to run the E:/ as many times as possible and see if anything shows up.

3. Google the problem and see if someone smarter than me has a solution.

Alas, none of my typical problem-solving techniques proved successful. Eventually, we realized the ever-simple solution: turn off the computer, turn it back on again. Facepalm.

In any case, I suppose this rather minor technological gaffe proved one thing to me: keep it simple and always have a back-up option.

I wish I had followed my own advice as I conducted interviews for my final project this weekend. I had some fantastic conversations with Eric and Brian (I learned some incredibly useful things about both of their writing/blogging processes), but all of my recording technology failed me. All. Of. It.

I spoke to both Eric and Brian over Skype and used CamStudio to record our conversations. Alas, during my conversation with Eric, my screen started flashing in all sorts of bright, photosensitive epilepsy-inducing colors. In a panic, I told Eric that I had to shut down my computer and we ended up resuming our conversation over the phone. Fortunately, with a computer reboot, my precious laptop was well and good, but I was far too terrified to reboot CamStudio again. I tried screencasting parts of our conversation (as Eric was generous enough to share his desktop screen with me and walk me through his design/marketing processes on his blog). Still no luck.

After I got off the phone with Eric, I felt utterly defeated and incompetent. Wasn’t this supposed to be easy to do? Wasn’t I supposed to have easy solutions to troubleshooting problems like this?

Before I spoke with Brian, I did a few test runs of CamStudio and saw what I had been doing wrong with Eric (I had not adjusted the settings appropriately; go figure). So, Brian and I had a nice, hour-long conversation, CamStudio chugging away in the background recording.

Hooray! This is working! I’ll have all of this fantastic data! 

So, my enormous video file with Brian saved successfully, but now? I can’t seem to open it in any of my media players. I keep getting error messages with every media player I attempt to use. I even attempted to open the file in a web browser. No dice.

In short: I’m frustrated. It’s even a little ironic perhaps that I’m writing a project on the relationship between functional and rhetorical literacy and I can’t even master functional literacy for myself!

I briefly whined about this to Mary and Aaron, and they both encouragingly suggested that my struggles could, in fact, enhance my project. Sharing that I’ve been learning as I go is a helpful admission of my own process in developing greater technological literacy. So, I’m grateful for those reflections from them, and I’m not willing to entirely give up hope on Skype recording software. I have one interview left to go, and I don’t think I’m really willing to risk losing more footage via CamStudio. Research time!

Code Year, Lesson 2 Part 1

I have a distinct memory of the first day of French 1 in college. I walked in confident that I would succeed. After all, I had been a successful Spanish student in high school; French was a romance language, ergo it wouldn’t be that different. Right?

Well, I quickly realized that in college language classes, you actually learn the language. My instructor, Alison, opened the first day of class speaking to us – a group that presumably had no prior French knowledge – primarily in French. It was a clear dive into the deep end. My, how different this was from high school Spanish class!

(Side note: I had a pretty strong Spanish education in high school, but we spent quite a bit of class time gathered in a circle singing songs by Juanes and Rebelde. So, you know. Not super rigorous.)

Anyway, this is perhaps a circuitous way of getting into my main point about programming, which is that with Code Year, I feel like I’ve similarly dived into the deep end. Unlike with French, however, I’m fighting not only the battle to learn quickly, but also the battle to overcome my anxieties about technology. I’ll admit that I’m a computer user more inclined to call tech support when something goes wrong than to take the time to troubleshoot myself. I figure that “an expert” must know more than me, right?

The glory of the digital age (and yes, I realize this is hyperbolic. Humor me) is that we CAN have control over our digital lives. I may not be “math-brained,” but I CAN learn to program. I have faith in this. It is just a doggone difficult task.

Before I get into the nitty-gritty, I have to say that part of what inspired this “can-do” attitude was a short video I watched earlier today from educator Stephen Chew aimed towards helping undergraduate students develop better study habits (i.e. DON’T MULTITASK; NO ONE DOES IT WELL). The video itself is admittedly a little cheesy, but he stated something in his video that struck a chord with me: many students don’t succeed in his classes because they assert they’re “just not good” at something.

This is something I’ve told myself so many times over: “I’m just not good at math.” “I just can’t visualize [insert any image/shape here].”

“I just can’t do it.”

There is some truth to the concern that some students can’t learn as quickly as others can in different subjects. But I’m coming to believe (increasingly) that we really ARE capable of doing anything if we just take the time. It’s doing things that feel uncomfortable to us that is the real push. I don’t think it’s much of a generalization to say that we’re all predisposed to avoiding uncomfortable feelings; that feeling of failure and incompetence is perhaps one of the most uncomfortable for me (this is, unfortunately, a common syndrome of living a life primarily validated by academic achievement).

Phew, with ALL OF THAT SAID:

I’m pretty frustrated by the programming. I can’t give up now because I know how important it is for me to learn. But dang it. It’s really hard for me.

For those who care about the technical stuff:

I completed the lesson Functions in JavaScript this week. From what I understand, functions are basically blocks of “reusable code.” It’s code you store so that you can give your program a certain command to run over and over again (without having to write out the same commands repeatedly).

See, so far, so good. I’m all into this efficiency thing. I like to color coordinate files and write to-do lists, so the idea of something like a function very much speaks to my organizing soul.

As these Code Year lessons tend to go, it started quite well. I must say that I felt incredibly proud of myself when I created my very first function!

"You never forget your first function"

Simple, yes, but still mine all mine! I made the console spit out my name! Winning!
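It looked something like this – a reconstruction from memory, so the names here are my own, not necessarily the lesson’s exact code:

```javascript
// My first function: all it does is print my name to the console
// when called.
var sayMyName = function () {
  console.log("Jenae");
};

sayMyName(); // prints "Jenae"
```

Trivial, but it’s the whole idea in miniature: store the commands once, then run them with a single call whenever you like.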

Of course, the lesson grew increasingly complicated. As with learning any foreign language, learning a programming language requires you to integrate what you learned from lessons prior (crazy, right?)

So, within functions you must also store variables. Defining and storing variables allows you to work with the numbers/words/items that will be important for your function (and therefore your program) to – well – function!

So, I didn’t capture an image of the console log text, but basically this activity required me to run the code to see what the function “greet” spit into the console log. Because the variable “greeting” was passed to the console log, the variable’s text (“Ahoy”) appeared in the console log.
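Reconstructed from that description (so this is my approximation, not the lesson’s exact code), the exercise would have looked something like:

```javascript
// The variable "greeting" is defined inside the function; passing it
// to console.log makes its text appear in the console.
var greet = function () {
  var greeting = "Ahoy";
  console.log(greeting);
};

greet(); // prints "Ahoy"
```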

So far, so good? Yep, so was I.

Here’s where things get tricky:

Inside a function, certain values can get “returned.” To clarify: in all of the examples I’ve shown, the functions just print information to the console without handing anything back. The return tool is what you use when you want a function to hand a value back to whatever called it.

Here’s an example:

The value “x” has been inputted into the function and two different functions are created: one that simply returns the value inputted and another that returns the value inputted, squared.
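I lost the screenshot for this one too, so here’s my best reconstruction of the exercise (the function names here are my own):

```javascript
// One function hands the input back unchanged...
var echo = function (x) {
  return x;
};

// ...and the other hands it back squared.
var square = function (x) {
  return x * x;
};

console.log(echo(5));   // 5
console.log(square(5)); // 25
```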

Again, I think we’re still following, right?

Remember those tricky if/while loops I discussed in Lesson 1? They’re baaaaack:

So, here, we’re basically just complicating the function further: this program will determine whether to spit out “The statement is true” or “isn’t true” based on the conditions established within the function.
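Here’s a sketch of the kind of function I mean (the condition itself is my own invention, not the lesson’s):

```javascript
var checkStatement = function (number) {
  // The conditions established within the function decide
  // which of the two messages gets spit out.
  if (number > 10) {
    return "The statement is true";
  } else {
    return "The statement isn't true";
  }
};

console.log(checkStatement(42)); // "The statement is true"
console.log(checkStatement(3));  // "The statement isn't true"
```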

What was tricky to me about this was figuring out how to define the conditions within the function. How do I use the correct syntax to break down the conditions? What’s the clearest/easiest way to do that?

Obviously, I’m still learning and I’m still working through a lesson on establishing parameters within functions, too. But… here’s where I left off!

I found myself reading and re-reading directions to make sure I understood, but now that I’m synthesizing what I’ve learned, it feels simpler than when I was learning it. I guess this shows I understand what I’m doing? Maybe?

Code Year: Lesson 1

I’ve made it through (most of) Lesson 1 of my “Getting Started with Programming” course on Codecademy and I’m already running into trouble! Nooo!

Mind you, the first few exercises within the lesson felt like a breeze. Really, I figured I’d be making apps and creating programs in no time! I suppose when you have exercises like “type your name”…

…And “Copy these instructions so that you can create one of those awesome pop-up boxes!”

… you’re bound to feel like a programming champ.

Yet these were just the “feel good” exercises before tackling the real deal.

Before I get into the details, here are some new terms I learned:

string:  a sequence of characters that the program treats as one unit. As a programmer, you need to mark “strings” (by wrapping them in quotation marks) in order for words to show up in your programs. If you don’t mark “strings,” the program won’t recognize those particular word units as important. You have to treat your program a little bit like your great aunt Ethel; assume that it will remember nothing. However, it’s easy enough to store strings so that the program will remember what you mean when you type in certain words.

variable:  a named container that stores a value (a number, a string, etc.) for the program to use as information later.

array: an ordered list of values (typically numbers) created so that the program can store multiple items at once and retrieve them in the same order/pattern each time.

I’m still not sure I completely understand the distinction between a “string” and a “variable” (aren’t they all simply units of different types to be stored?), but the above definitions are a simplification of how I’ve parsed this out in my mind.
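To make those definitions concrete, here are some toy examples of my own (not the lesson’s exact code):

```javascript
var name = "Jenae";            // a string: characters wrapped in quotes
var year = 1988;               // a variable storing a number
var luckyNumbers = [3, 7, 21]; // an array: an ordered list of values

console.log(name);
console.log(year);
console.log(luckyNumbers[0]);  // arrays count from 0, so this prints 3
```

(As far as I can tell, a string is a *type* of value, while a variable is the labeled box you keep any value in – a string, a number, even an array.)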

In any case, as I moved along through the lessons, I discovered that programming language involves a lot of syntactical precision. After all, a program can only run if the writer is incredibly explicit in his/her directions. Let me try to provide an example:

The exercise wanted me to command the program to parse out two letters from the string “hello” (in this case, the letters in the first and second positions of the string). Alas, do you see what mistake I made the first time around?

That’s right: I put a period at the end of the command while I should have left it open. That one little dot messed up the entire operation! Isn’t that bananas?
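For the record, here’s the command as best I remember it, with the offending dot removed:

```javascript
// substring(0, 2) grabs the characters in positions 0 and 1 of "hello"
// -- and note there's NO period at the end of the line!
console.log("hello".substring(0, 2)); // prints "he"
```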

Of course, syntactical accuracy is not the only thing I learned to be wary of. Logic is hugely central to programming, something I had never realized before I began Code Year.

This makes complete sense, of course. How would a program know how to function if it did not account for all possible contingencies?

One concept I especially struggled with was that an “equal” sign does not always mean “equal.”

I mean, what? Equal is not equal? What are you trying to do to me, JavaScript? Are you going to tell me next that people aren’t people?

So, as it turns out, one equals sign simply assigns a value to a variable. It does not necessarily mean that the value at the end of the “equals” sign is going to be the “answer” to the program (especially if it’s a mathematical program). Three equals signs together (===) are used to check whether one variable’s value is equal to another variable’s value in the program.

Oof, yeah, I had to read that a few times over, too. My brain is not used to considering all of these different mathematical contingencies!
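Here’s a little example of my own that helped the distinction click:

```javascript
var x = 5; // one equals sign ASSIGNS: store the value 5 in x
var y = 5;

console.log(x === y); // true: three equals signs COMPARE two values
console.log(x === 6); // false: x holds 5, not 6
```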

The primary circumstance in which a programmer may use the “=” and “===” signs is for “if” statements. Here’s an example:

Do you see what’s going on here? Basically, this is a program that will spit out one of the console logs (i.e. “Your name is Sam” or “Your name isn’t Sam”) depending on whether the user of the program answered the prompt (“what’s your name?”) with – well – Sam or not Sam. The “if” statement (with the three equals signs) determines what the response for the user will be if “Sam” is entered or not entered.
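Since I can’t show the screenshot, here’s a sketch of the same logic. The lesson’s version used a browser prompt; I’ve swapped the prompt for a function parameter here so the piece that matters (the “if” with the three equals signs) stands on its own — that swap is mine, not the lesson’s:

```javascript
// In the browser, userName would come from prompt("What's your name?")
var checkName = function (userName) {
  if (userName === "Sam") {
    return "Your name is Sam";
  } else {
    return "Your name isn't Sam";
  }
};

console.log(checkName("Sam"));   // "Your name is Sam"
console.log(checkName("Jenae")); // "Your name isn't Sam"
```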

See, it really makes perfect sense. It’s just a matter of completely rethinking syntax.

In any case, I felt pretty confident in my understanding of how the “if” and “else” statements work in a program. What’s (still) tripping me up are the “while” statements.

A “while” statement should – in theory – set up a continual loop within the program. However, I seem unable to nail the syntax to create these while statements successfully and I don’t quite know what to do. Any thoughts, cyberspace? Here’s the problem I’m running into:

Perhaps it’s difficult to read the instructions (they are in tiny-ish script), but I’m supposed to create a program that prints the word “hello” twice using the “while” loop. However, I’m not sure where the variable “i” fits into that, or how I tell the program that I’m not really using any numbers here. I’m afraid I’m a bit stuck! Perhaps some future Googling will help me solve this problem.
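While writing this post, I think I’ve started to piece together what “i” is doing: it’s just a counter, and the numbers never get printed at all. This is my best guess at a working version, so take it with a grain of salt:

```javascript
var i = 0;      // start a counter at 0
while (i < 2) { // keep looping while the counter is below 2
  console.log("hello");
  i = i + 1;    // count up -- without this line, the loop never ends!
}
```

If that’s right, the numbers are just bookkeeping so the loop knows when to stop; the “hello” is the only thing the user ever sees.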

In any case, I’m really trying not to let myself get frustrated. Even in this first lesson, I found myself reading and re-reading instructions for even the most basic of “activities.”

Perhaps I’m simply not acclimating to thinking from “the back end” like this. Indeed, I fancy myself somewhat creative, yet from what I can gather, successful programmers are not merely creative: they are able to envision whole worlds and define the terms that create those worlds.

That’s mind-blowing to me.

I suppose for now I’ll have to be content with the mere shrubs and bushes I’m starting to program before I can think about the tall buildings and – goodness knows – the people to populate this digital world.

Sails Up!


My name is Jenae. This is what I look like:

High Quality Laptop Camera FTW

I’m a first-year PhD student in the English Department at UC Davis and I’m going to do something I’ve never done before: I’m going to learn to code in JavaScript.

Perhaps the title of this blog is misleading: I’m not a luddite. In fact, I was born in 1988, which makes me a “digital native,” right?

Indeed, when I buy a new electronic product, I don’t read a users’ manual. I tend to dive into new products, darting between screens, pressing different buttons, and simply seeing what works and what doesn’t. Only on computers do I approach the unknown with such brazen disregard for rules.

Yet computers are governed by rules. Scripts upon scripts dictate what we see on our screens. There’s a whole language that determines why our keystrokes appear the way they do.

For a long time, I’ve been content with ignoring this “man” behind the curtain, the words, letters, and symbols that do the work of my computer before me. As a humanities student, I am often firmly content with delegating technical understanding of the world to skilled engineers. How does that stove turn on? Magic! How do magnets work? It’s a miracle! Why do objects fall to the ground? Um, gravity?

But understanding how a program works has some very tangible relevance for me. If you’re reading this, you know that writing is changing. It’s no surprise that digital communication is the primary means by which we exchange information in the 21st century and, for better or for worse, fellow luddites, that’s the way it’s going to be. I love a dusty, old book as much as the next English student, but goodness knows I’m not going to resist this pressing change in our literacy practices.

I’ve tried this once before. This past summer, I attempted to teach myself Python, partially for the professional value and partially because mentioning my interest in programming scored me a lot of messages on the online dating site, OKCupid (SIDE NOTE: it really is incredible how many men are interested in a woman who knows a technical skill. This, in and of itself, is worthy of some sociologist’s dissertation work).

I perhaps should also mention that I’ve felt hugely hypocritical for remaining absent from the blogosphere. After all, as part of my English degree program, I am pursuing what is known as a “designated emphasis” in Writing, Rhetoric, and Composition Studies (WRaCS) and plan to specifically focus my (eventual) dissertation research on digital writing practices in some way. In what way that will be, I’m not entirely sure.  All I know is that our digital literacy practices have huge implications for our future study of literature and writing and I’m not about to miss out on that.

With that said, this blog will chronicle my process of becoming less of a technological noob and more of a technological neophyte. There will be some musings here about my own relationship with digital literacy, cool links/websites I find related to digital literacy and practices, and perhaps mostly my journey in learning JavaScript.

I am currently working through Codecademy‘s Code Year project to learn JavaScript. Each week, Codecademy will send me a new lesson and I will complete it within the week.  Some weeks I may look like this:

This was captured candidly when I was trying to figure out my webcam...

Other weeks, I may look more like this:


But by making my thoughts public, I will hopefully be held accountable to push on through and avoid looking too much like this:

In the meantime, I’ll be balancing graduate school coursework, but will likely tinker with some web design as well, not only to personalize this page and make it beautiful, but also to become a more apt, savvy, and comfortable tech communicator.

See you in cyberspace!