Prospect research as an alternative career for archivists and librarians

This is the text of a presentation as written that I gave at the Fall meeting of MARAC in Roanoke, Va., on October 9, 2015. I adlibbed some, but it is pretty close.

My name is Ben Bromley and I am a development research analyst with Virginia Commonwealth University. While this may seem to be a left turn from traditional archival jobs, it is another job in the greater field of information management. The training that I received in library school and on the job as an archivist gave me the qualifications needed to get a job in prospect research and management.

Intro to University Development: We all know that universities and non-profits are reaching historic lows in funding from state and federal governments. Cutting public support has been the trend over the past decade, and there doesn’t seem to be much sign of it abating.

For these institutions to continue to provide the same level of services they have in the past (or, maybe some day, to start adding new positions and expanding services), private philanthropy is needed. Embedded in the world of private philanthropy is a job that often calls for applicants with library and information science degrees, and one that my archival training prepared me for very well: prospect research.

Prospect research and prospect management overview: The goal of prospect research is to identify new prospects who might be interested in donating to your organization and then to find the best match for their interests at your organization. In short, we try to determine a prospect’s affinity to our organization and estimate their capacity to make a major gift.

Prospect research was formalized as its own field fairly recently; APRA, the professional organization for prospect researchers, was established less than 30 years ago.

There are many different places that a prospect researcher can work: many larger organizations, such as universities, will have prospect researchers on staff; some smaller organizations may rely on individual freelancers or consultants, and there are vendors who, in addition to other services, hire prospect researchers to work on their side.

The work is often collaborative: you work not only with the other staff in your office but also with development officers and other university employees.

Research: The part of the job that came most naturally to me, and that will, I think, come most naturally to most of you, is research. The most visible product of the prospect researcher is the research profile. Instead of writing a biographical or historical note on a person or organization as part of a finding aid, we are writing a similar biography of a person or organization for the use of a development officer.

Just like archival description, the types of profiles that we write depend on the needs of the development officers. Some development officers will only want a brief profile, summarizing a person’s current job and how much we think they can give. Other development officers want as much information as we can possibly provide on a prospect so that they can be prepared for any eventuality. We also balance what a development officer thinks they need with the time we have available and the tasks we are doing for the rest of the institution.

The information that we typically provide is a summary of a person’s professional career, their personal relationships, and their non-profit giving and affiliations. And just like a bioghist note, not everything that we find is going to go into a profile. Even though we are researching people using publicly available information, people still have a right to privacy; that right is compounded when we bring all of this information together from multiple sources into one document. We make sure to include what the development officer needs to know, and leave the rest out.

One of the key differences between prospect research and traditional archival description is right there in my job title: along with presenting the information, I also analyze it and draw conclusions based on my instinct and the available evidence. Factors such as the person’s relationship to the institution, previous history of philanthropy, and interests and hobbies, along with a myriad of other information, are taken into consideration when evaluating a prospect. Just because someone has plenty of money does not mean that they actually give money, or are willing to give it to your institution.

This analysis comes into play most when we are trying to estimate the giving capacity of a person. Obviously, we cannot see bank accounts, stock holdings, or anything like that. The resources that we use (and I will get to those in a minute) can only find publicly available information. So, estimating someone’s giving capacity requires some investigation, some guesswork, and some analysis. We typically look at the person’s current and previous jobs, and how long they have been in the field, to estimate what their current salary might be. We look at their real estate holdings, political and non-profit giving, and giving to our organization as well, and plug all of this information into a formula that is fairly standard across the industry. And we take a look at their personal situation and family as well; even if two people look the same through the formula, someone with two young children is probably less likely to make a big gift than someone whose children are grown. That is the kind of analysis that we bring to the table.
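To make the capacity estimate concrete, here is a purely illustrative sketch in Python. The function name, the inputs, the 5%-of-visible-assets rule of thumb, and the five-year window are all my assumptions for the sake of the example; actual formulas vary from shop to shop and are usually more involved.

```python
# Illustrative only: the 5% rule of thumb and the five-year window below
# are assumptions for demonstration, not an actual shop's formula.

def estimate_capacity(real_estate_value, annual_income, known_giving, years=5):
    """Rough major-gift capacity over a multi-year pledge period,
    based only on publicly visible wealth markers."""
    # Approximate visible assets from real estate plus several years of income.
    visible_assets = real_estate_value + annual_income * years
    # Rule of thumb: a donor might give about 5% of visible assets
    # over the pledge period.
    capacity = 0.05 * visible_assets
    # Demonstrated philanthropy acts as a floor: someone already giving
    # at a certain level has shown at least that much capacity.
    return max(capacity, known_giving)

# A prospect with a $500k home and an estimated $100k salary:
print(estimate_capacity(500_000, 100_000, 0))  # 50000.0
```

The qualitative side of the analysis, such as discounting for a prospect with young children, happens outside any formula; the number a researcher reports is a starting point, not a verdict.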

Data integrity: Another one of our primary duties as researchers is making sure that the information found in our database of record is accurate and up-to-date. We are the custodians of the database, and the information in it is our responsibility.

We make sure that we have the most up-to-date contact information for everyone in our database so that we know our communications are getting to them. Development officers have a limited amount of time, and can only spend one-on-one time with people likely to give larger gifts. However, universities still want to stay in touch with their entire alumni base, since any money given is sorely needed and people giving small amounts now may give larger amounts in the future. By keeping contact information current, we help the university lay the groundwork for the next generation of donors.

On a more practical note, we also want to make sure that we know, for example, when people pass away so that their spouses do not continue to get mailings or emails.

Prospect management was the area that required the most learning for me as I made the transition from archival work to prospect research. Each development officer carries a portfolio of prospects with whom they are working to try to secure gifts to the institution. Some are proactive about adding and dropping prospects, but often they need our help in doing so. We want to make sure the development officers have the right prospects in their portfolios, so we help them identify new ones and help them make the decision to drop a prospect who is not right for that department or for our institution as a whole.

Another part of prospect management is making sure that development officers document the contact that they have with donors. We want to make sure that we stay out of each other’s way and don’t bombard the prospect with conflicting information. Encouraging and reminding the development officers to document their interactions in the database of record helps prevent this from happening.

So what sources do we use to conduct prospect research? We use a mix of proprietary and freely available databases and sources of information to perform our job.

For biographical research I will use proprietary databases such as LexisNexis, genealogical resources such as Ancestry and FamilySearch, newspapers, court records, and social media accounts.

To estimate someone’s giving capacity, I will use resources such as salary surveys (typically published for each industry), property assessment databases, Zillow, SEC and FEC filings, and more.

The two hardest parts of making the transition from archival work to prospect research are somewhat related: math, and the business and accounting terminology that typically surrounds it. Luckily, archivists are naturally curious, so I now know things like what a charitable remainder unitrust is and how to read a 10-K filing submitted to the SEC.

The Special Libraries Association specifically supports prospect researchers, and they have some good resources on their website. There are also local APRA chapters throughout the country, which have resources and conferences; APRA-Virginia, for example, typically holds an affordable one-day conference twice a year. There are also blog posts and other resources online for archivists and librarians going into prospect research work. I hope I have given you a good overview of how archivists are already qualified to be prospect researchers. Feel free to ask questions in the Q and A, or come up afterwards and talk to me.

Various methods of creating podcasts

Having been the producer of Filibuster – The Black and Red United Podcast, for over 70 episodes, I have gone through a number of methods of recording, most of which have annoyed me in one way or another. People keep asking me, though, how we do this podcast, so I figured I would lay out my tools and all the methods that I have used, and see if any of them work for you. I’ve broken this up into production, hosting, and recording.

Production

First, however, here is the production workflow, which has stayed the same the whole time.

  • I edit the podcast in Audacity, which is a great audio editor that works on Linux, Mac, and Windows.
  • The effects that I typically use are Fade In/Fade Out (for the intro and outro music), Truncate Silence (to remove long pauses and dead air), Noise Removal (to get rid of background hums and hisses), Leveler (to pump up the volume of the outro after I fade it in and out), and Compressor (to even everything out).
  • When everything is done, I export to MP3. Make sure to check your export settings here, because there is never a need for an audio podcast to be 80 MB. I use Variable Bit Rate encoding at quality level 7, which usually gives me a 17-25 MB file, and it sounds perfectly fine.
  • Once the MP3 is exported, you are ready to upload it to your platform of choice.

Hosting

Proper hosting makes sure that you can get your episode off of your computer and into other people’s ears. Over the course of Filibuster, we have used a number of different methods of hosting, but let me just say this now: You don’t have to pay for it, if you’re willing to do a little work.

  • Buzzsprout was the first hosting platform we used, and it was your typical paid podcast hosting site, like Libsyn and many others. Their cheapest real plan is $12 a month, so that’s not great.
  • Through SB Nation, we got a free Pro account with SoundCloud, which is where we currently host the podcast. You can apply for their beta podcasting platform, so that could be a good option for you as well.
  • YouTube is always an easy option, even if yours is not a video podcast, but it would be difficult to get into iTunes and other podcast providers; not really recommended unless you produce video.
  • The best free option, as long as you have a blog (and who doesn’t?), is to host your audio on the Internet Archive. You can embed it into your post, which I have done with the very first episode of Filibuster. Placing a link (<a>) to the raw MP3 in the page as well will embed the file in your RSS feed, allowing it to be used by podcatchers. The feed for your podcast category is then your podcast feed.



  • The catch with the Internet Archive is that they really want you to license it under a Creative Commons license. That’s fine for us; we already license it that way on SoundCloud, so IA would work for us too.
  • Some sites recommend running your podcast feed through FeedBurner so that you can submit it to iTunes, but I have not confirmed that myself.
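For what it’s worth, the reason the raw MP3 link trick works is that podcatchers look for an <enclosure> element on each feed item. A blogging platform that picks up the MP3 link will typically generate something roughly like the following in your RSS feed (the URLs and file size here are placeholders, not our actual feed):

```xml
<item>
  <title>Filibuster Episode 1</title>
  <link>https://example.com/2015/03/filibuster-episode-1/</link>
  <!-- The enclosure is what podcatchers download; length is the file size in bytes -->
  <enclosure url="https://archive.org/download/example-item/episode1.mp3"
             length="20000000"
             type="audio/mpeg" />
</item>
```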

Recording

This is the part that has caused me the most frustration, going back and forth between methods constantly. A note on recording: whenever possible, record to a lossless audio format such as WAV or FLAC; your final quality will be much better if you do so.

  • Before I took over the producer duties, we recorded via Skype with the producer using Garage Band to grab all of the audio and then do the editing. Update: Via the comments, LineIn and Soundflower were the programs used to record the podcast on a Mac.
  • When I took over, we continued to use Skype to talk to each other; however, since I use Linux, Garage Band was not an option. We started by using Skype Call Recorder, which is updated only rarely (last release in 2013) but still works. We sometimes had problems with people dropping out, and the recording would end if that happened to me.
  • We then switched to Google+ Hangouts On Air for a long time. Despite the fact that it was “live” on YouTube, we only invited those who we wanted to be on the show. After the show ended, I would download the video file, use VLC to rip the audio out of it, and then edit in Audacity. We had problems with audio clipping, people dropping out of the call, and people getting into the call in the first place. On the plus side, if someone drops out the recording continues, and they can try to rejoin the call. All in all, this is a decent way to run a podcast.
  • Since I like seeing the faces of the people to whom I am talking, we have now switched to a regular Google video hangout, which I record using Audio Recorder, which captures both my microphone and my sound card; that means we could switch between communication platforms without changing how we record. If you’re using Linux, I highly recommend this piece of software, which makes recording a podcast very easy.
  • I just use a Blue Snowball USB microphone, which is a massive improvement over the built-in microphone in my computer. If you’re going to do your own podcast, getting a decent microphone is well worth the investment.

That’s all I can think of; any questions?

Cross posted from http://beforeextratime.com/2015/03/various-methods-of-creating-podcasts/

On leaving (but not leaving) archives

For reasons that are boring to get into, I decided that I needed to leave the Library of Virginia a couple months ago and move into a new position that is not in an archival repository. I loved my time at LVA, and it saddened (and continues to sadden) me that I needed to leave, but such is life sometimes.

But I don’t want to leave the archives community, nor the archival profession overall. I still think that manuscripts, institutional archives, open government, and history are important in a modern society. So that is why I am starting RVA Research, through which I will perform independent research and archival consulting services throughout Central Virginia. This is my way of staying in the community which has shaped my professional career, a community which has given me many friends and has made my career possible. So, if you have any research needs in Central Virginia (or know someone who does), I hope you’ll give me a chance to help!

Turn your Raspberry Pi into an easy VPN using OpenSSH

There are plenty of tutorials out there on how to turn your Raspberry Pi into a full-fledged VPN, often using OpenVPN. However, many people don’t need all of the features (and all of the setup) of a full VPN server in order to just protect their browsing when they are on unsecured wireless networks (or for whatever other reason you would want a VPN).

Enter OpenSSH. Most of you know it as the program that lets you log in to other computers and servers through the command line; for a long time, that’s all I did with it too. However, you can also use it as a simple VPN on your Raspberry Pi! First, go into your router’s settings and forward a port to the Pi for your VPN traffic; personally, I use port 3312. Then, run the following command on your Pi:

$ ssh -TCD:3312 username@localhost

where username is the name of your user on the Pi; you will be asked for that user’s password when the session starts. (The -T flag disables a terminal, -C compresses the traffic, and -D with an empty bind address before the port opens a SOCKS proxy on port 3312 that listens on all of the Pi’s interfaces.) Then, all you need to do is configure your browser to use that proxy. In Firefox, go to Preferences->Advanced->Network->Connections and click on Settings. Select “Manual Proxy Configuration” and enter your home’s public IP address as the “SOCKS Host.” Put 3312, or whatever port you chose, in the port field and everything should work fine. When you end the ssh session, the proxy will stop working, so remember to switch your Firefox preferences back to “No proxy” or you won’t be able to browse. It is that simple!

MARAC Plenary thoughts: We will keep getting hit over the head with advocacy until we do it

During the MARAC plenary today, Kathleen Roe of the New York State Archives gave a rousing call for more advocacy by archivists for their profession to those external stakeholders who don’t necessarily know what we do but upon whom we rely for funding. It was an inspiring speech that had a number of us fired up to go back to our home institutions and to do more to advocate (with fire being a theme throughout her speech).

But if you look back over the past couple of years, plenary speakers have been exhorting us to get involved with advocacy left and right. Rand Jimerson challenged MARAC to be advocates for ourselves, our collections, and the stories of the subaltern found in those collections just three years ago. The year after, MARAC had an entire conference dedicated to advocacy, where Christy Coleman charged us again to advocate to politicians, stakeholders, board members, and beyond. The Council of State Archivists has published “The Importance of State Archives,” a document that gives examples and talking points that can be used by many kinds of archival repositories. A Google search for archival advocacy gives you thousands of hits, and shows that this is not a new topic, but one that has been talked about more and more in the past few years.

Roe and others during her plenary talked about archivists needing to ask for what they are worth in salary, benefits, and from their schools, and rightly so. At the same time, archival institutions should ask for what they deserve from funding sources, politicians, other government agencies, and beyond. Too often archives are willing to accept the status quo, or accept what their funding sources are willing to give them, because they don’t think that they would be able to effect any change. Then, when situations like what happened to the Georgia State Archives occur, or when massive cuts are on the table, our advocacy furor is shaken up and we finally start taking action. But when a situation is that dire, it is too late for grassroots campaigns to do much of anything. So please, conference program committees: continue to beat us over the head with advocating for ourselves, our institutions, and our usefulness until we start to get it and take it to heart. We need it, and we won’t get there unless you continue to push us.

Does it matter if libraries and archives aren’t involved with open government data repositories?

Accessing information about government no longer has to mean going to a building and requesting permission to sift through paper documents. It doesn’t even have to mean writing a letter, filling out a complex form, or trying to figure out who to contact about public records or how to access records in the first place.

Technology has enabled faster, more efficient and more user-friendly access to government information — to public information — and governments across the country are increasingly embracing this opportunity in their policies and practices. One way to do this is to adopt open data policies that build on precedent set by existing policy like public records laws.

-From the Sunlight Foundation’s post “Open data is the next iteration of public records.”

Government transparency is a good thing, so I am happy to see the proliferation of open government data repositories, like data.gov and its equivalents at the state level. One worry that I have, however, is that libraries and archives do not seem to be involved in creating, managing, disseminating, or describing this information. Most of these projects come out of places such as a state’s IT agency, the budget department, or some other administrative actor. I will admit that I have not looked at all of the open government data repositories out there, so there may be libraries and archives involved in some of them. But this is the kind of data that has traditionally been provided by libraries and archives, and now it is being taken out of our hands and served elsewhere.

When I initially sat down to write this post, I was ready to be full of angst about the state of archives in our society, and about our roles being taken over by IT departments and computers. But open data repositories generally deal with data that is machine-readable, and usually machine-created. On the other hand, we archivists have an explosion of digital data to deal with, the natural extension of the explosion of paper records during the middle of the 20th century; emails, Word documents, Excel spreadsheets, websites, blogs, Twitter accounts, Facebook, and more still defy easy manipulation by computers and easy aggregation into data repositories. One could argue that by taking on the burden of providing datasets that are easily described in the aggregate, these repositories free archivists to concentrate on records that cannot be described so easily or automatically. In addition, many of these datasets are provided in machine-readable formats, which obviously lend themselves to description by machines rather than archivists. One of the things at which archivists excel is taking a large mass of data that is difficult or impossible to describe automatically and providing access to it for researchers. But there is always an existential pang in my soul when a record previously provided by archives moves elsewhere.

So what do you think? Does it matter? Am I being an open government hipster, saying that libraries and archives were into open data before it was cool? Or should we cheer the move of these types of records to a different sector?

The Library of Virginia now has emails from Tim Kaine’s administration available online!

Since being hired by the Library of Virginia just over a year ago, a part of my job has been to process emails from the administration of Governor Tim Kaine, who was Governor of Virginia from 2006 to 2010. And last week, the first fruits of our labors were realized, with the release of over 66,000 emails from Tim Kaine and his executive office. I was only a very small part of this overall effort, but I am proud to be part of an organization with this kind of commitment to making digital public records accessible. You can find the whole collection on our Virginia Memory site; have a look!

Why I use free software

As everyone by now has heard, Google has decided to pull the plug on Google Reader, its RSS feed aggregator. Google’s reason is that Reader has a declining user base and that it wants to concentrate on other projects. I should have known that this was in the works ever since they killed the social aspects of Reader and made its link harder to find. Unlike some other companies, Google is providing users with an easy way to extract their data and giving them three months before the service is shut down.

But let’s be clear: Google’s motives are entirely self-interested. It was one thing to let Google Reader continue to operate when, even though it provided no benefit, there was no harm in letting it occupy a small section of Google’s massive server farms. But now the rumor is that Google will be launching its own competitor to Apple’s Newsstand, a place where people can pay for digital subscriptions to online content. Many of those content providers publish at least some of that content for free, often with RSS feeds; by axing Reader, Google removes any perceived conflict that might prevent those content providers from joining up.

I tend to go back and forth between free software ideology and convenience, and this decision by Google has propelled me firmly back towards ideology. Projects like Firefox and Debian (I am a proud user of both), run by non-profits that are big enough to compete with industry-backed alternatives, are key to a healthy software ecosystem. Of course for-profit companies, like Canonical, Red Hat, and Google, are necessary and useful members of the free software community; however, having organizations dedicated to free software for its own sake helps keep the ecosystem vibrant and alive. While I may not always agree with the goals and methods of the Free Software Foundation, I admire their dedication and think that they are a key member of the community. The ability to have control over your data, without needing to rely exclusively on a third party for its continued existence, is more important than ever in the era of cloud computing. (It’s also the reason that I think libraries and archives should be even stronger supporters of free software than they already are, but that is a topic for a separate post.)

As of now, I’m trying out two different replacements for Google Reader: NewsBlur and tt-rss. Both are free software projects: NewsBlur is primarily a hosted service, though you can get the code from GitHub, while tt-rss is only available to those who can install it on a server. Both options also have Android apps, which is my other key requirement. Right now NewsBlur is still too slammed for me to give it a proper tryout, so tt-rss (installed on my Raspberry Pi!) is my main feed reader. If you are looking for an alternative to Google Reader, try one of those if you can.

Science Friday talks digital preservation

Everyone’s favorite Friday NPR show, Science Friday, covered a topic that is near and dear to my heart: digital preservation! This past week they talked about a new method of long-term preservation, which entails encoding the data onto DNA. Obviously, the practical use of this technology is still decades away, but it could solve the problem of media obsolescence.

AOI, now powered by Raspberry Pi!

Among Other Items, my humble blog, has been undergoing a bit of a change behind the scenes. Formerly, this site was powered by an old desktop computer, probably released in 2002, which had been converted into a Debian server. Now, it is powered by a Raspberry Pi, a $35 computer running a variant of Debian. I can tell that the site is a little slower, but it’s worth it for the ability to experiment with this new little machine and for the power savings it will reap.

If you want to learn Linux, I highly recommend getting a Raspberry Pi. Lifehacker has a lot of different recommendations for projects that you can try, and I will probably have some too if you are interested!