Tuesday, November 16, 2010

Unit 12

Using a pre-installed VM presents different challenges than building one directly around the planned digital collections. The biggest factor in deciding between them would be the culture of knowledge fostered and the support (or lack thereof) from administration. I believe building one would be preferable in terms of the robustness of the repository. However, if there is not a shared sense of commitment and understanding among stakeholders, particularly those who decide policy and funding, it may prove to be too big a bite to take. In particular, preservation issues must be thoroughly addressed, and I can see the logic of using a pre-configured setup if there were doubts about long-term funding. It is hard to say for certain, because I am learning this in a somewhat isolated environment; in the real world, more people and opinions would impact things a great deal. The argument that pre-configuration frees up more time to devote directly to the collection is valid, but the counter would be that pre-planning should, and could, do the same on a self-built system.

Tuesday, November 9, 2010

Unit 11

Perhaps it is because it is the freshest memory, or maybe because I have had more experience with evaluating websites in this field, but I was impressed with the Omeka site. In part I felt it was nicely geared toward the beginner, with both screencast and text-based documentation. There was a logical setup to the site and I felt the general overview was complete. One negative was that the FAQ page had been deleted and had not been replaced for over a year. I particularly liked the Use Cases forum, in which developers explained how they used Omeka at different real-life institutions. This gave me not only insight into how to use Omeka but also a more general sense of what other types of people want to develop digital collections beyond libraries, archives and museums.

Drupal is aesthetically a little fussy for me, but the documentation was significantly more detailed on the technical side than Omeka's. It is also clearly much more popular and has been around longer, which means its background information and forums have tackled more problems and offer more solutions. The Drupal and DSpace home sites have a lot in common in terms of numerous links, depth of documentation and a general sense of being overly full. An example from DSpace is the feature of linking ‘child pages’ to each main section. There are reasons why this would be helpful for following a topic throughout the website, but it adds numerous links to the page that are unnecessary or confusing for the general user.

JHOVE strikes a balance between the bustling atmosphere of Drupal and DSpace and the cleaner Omeka. It is clearly focused on IT staff. A concern is that some of the information is a little old; the news page, for example, has only two links, both from 2008. Has nothing happened in two years, or is no one managing the website? Neither possibility inspires a sense of confidence.

The OAI-PMH main site is deep and shows that the project has history and is a major international endeavor. It is a little impersonal and expects a certain amount of previous knowledge from its users on things like acronyms. My experience with the install process was positive, but the website itself is a little intimidating.

One of the key features of all these systems is that they are open source. To me this means the sense of community, the communication and forum options, the documentation and the currency of news would be very important considerations in the choosing process. I like Omeka, but the appeal of DSpace or Drupal is the large and active community of users that could provide support for free. It is a balancing act, and I would give a lot of consideration to future support before making commitments.

Sunday, November 7, 2010

Unit 10

It is an interesting question how to determine how successful a service provider of harvested metadata is, as we are entirely dependent on their efforts for our results. Without the service providers, the information does not get found easily. Which makes it curious to me that the service providers I was able to examine were rather a mix of strange bedfellows.

Ex. 1. http://www.perseus.tufts.edu/hopper/search?redirect=true

The collections that provide the sources are disparate, to say the least. The dominant collection is on the Art and Artifacts of Greek and Roman Materials, while the second-largest contributor is a collection of 19th-century American history, including the digital archive of the Richmond Times Dispatch newspaper. Subject, time period, place, etc. are not held in common, and when searching, the two faces of the collection are very apparent. My assumption is that the host, Tufts University, finds this combination convenient for its own reasons, but it is not a logical one for the general user.

Ex. 2 http://re.cs.uct.ac.za/

This was a different style of searching than the norm. What the site does (from its frankly hideous-looking interface) is allow you to set metadata parameters that you can then apply to the OAI-compliant providers listed. There is no way to search by keywords, and it requires knowledge of how OAI harvesting works to make sense. Again the collection of providers comes from all sorts of institutions around the world, with little obviously in common. It is a kind of inside-out search tool. Another confusing point: the Open Archives list shows Virginia Tech as the host, but the site itself is from the University of Cape Town. Bit of a difference between the two!
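For anyone as new to OAI harvesting as I was, here is a rough sketch of what a harvester actually does. The endpoint URL and the sample response below are invented for illustration, but the ListRecords verb, the oai_dc metadataPrefix and the namespaces follow the OAI-PMH specification as I understand it; a real harvester would fetch this XML over HTTP instead of using a canned string.

```python
# A real harvest request looks roughly like:
#   http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
# Here we parse a canned (hypothetical) response instead of going online.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:rec1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Richmond Times Dispatch, 1861-01-05</dc:title>
          <dc:date>1861-01-05</dc:date>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def extract_titles(xml_text):
    """Pull the dc:title values out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.findall(".//dc:title", NS)]

print(extract_titles(SAMPLE_RESPONSE))
```

The service providers I looked at do essentially this, over and over, against every data provider on their list, which is why the harvester's choice of providers determines what a search can find.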

Ex. 3 http://hispana.mcu.es/es/estaticos/contenido.cmd?pagina=estaticos/presentacion

By far the most successful of the examples, in terms of relevancy of search results, was the Hispana site. In large part this is because it is a dual project: one side is a directory of digital projects from Spain, and the other is a harvester for those same projects. This two-in-one approach meant that searches for common terms gave relevant results. The limitation is that it is all related to Spain. However, I would prefer to go to more than one service provider and get relevant results, if they are all like the Hispana site, rather than go to the Perseus site and get an unusable mix.

Tuesday, October 26, 2010

Week 9

Ah, the metadata dilemma. Too much is too expensive, while too little makes the whole project pointless since no one will find it. For digital libraries in general I think this is an area that is both art and science. For my digital collection in particular, the potential audience and known creators are who I have in mind when I am experimenting with cataloging. What I mean is that I envision my collection of webcomics as being of popular-culture interest, not for a specialized field or academia. For general users who are comfortable online, traditional subject terms are not adequate, as they can be old-fashioned or non-intuitive. Because of that I have been playing mainly with keywords and tags. Neither is perfect. Keywords have potential, being natural-language based and a well-known search method; I think it is the system most users will be comfortable with. I personally like tagging as a method of description, but without high user involvement and/or collection density it doesn’t necessarily work well. In both cases, consistency is dependent on me (the administrator) paying attention and keeping track of what I had chosen in previous cases. The only way around this problem I can think of is to include decision-making for terminology in the planning stages, and then hope one is prescient enough to cast the net wide enough to give full coverage.
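As a sketch of what I mean about consistency depending on the administrator: a tiny, entirely hypothetical tag-checking routine (the function names and sample tags are mine) that catches the near-duplicates I would otherwise have to remember by hand.

```python
# Collapse case and whitespace so 'Web Comics' and 'webcomics' match,
# then split incoming tags into ones we already use and ones that need
# a human decision before entering the vocabulary.
def normalize(tag):
    """Canonical form used only for comparison, not display."""
    return "".join(tag.lower().split())

def check_tags(new_tags, used_tags):
    """Return (accepted, flagged); flagged tags are new to the vocabulary."""
    seen = {normalize(t): t for t in used_tags}
    accepted, flagged = [], []
    for tag in new_tags:
        if normalize(tag) in seen:
            accepted.append(seen[normalize(tag)])  # reuse the canonical spelling
        else:
            flagged.append(tag)  # new term: add deliberately or reject
    return accepted, flagged

used = ["webcomics", "humor", "slice of life"]
print(check_tags(["Web Comics", "Sci-Fi"], used))
```

Even something this small would push the terminology decisions into the planning stage, which is where I concluded they belong.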

Tuesday, October 19, 2010

Week 8

While I have become quite cavalier about setting up a new VM, actually installing EPrints was more anxiety-inducing. Each time we install a new system, there are new areas of confusion or grey spots in my previous understanding. I am, however, taking the lesson from IRLS 673 to heart. At the beginning of that class, command line work seemed beyond my understanding, and while I don't claim to understand it completely now, I have improved by magnitudes. I believe the same will happen with time spent on Drupal, DSpace and now EPrints. Comparing the installation experiences between the three is apples to oranges to kiwis. I currently favor Drupal for my semester project, in part because I was most comfortable with that installation. DSpace was a horrible experience, though to be fair most of the problems were externally generated. EPrints has been in between. The main issue with the install has been that it has taken more time, even though there have not been major problems. The aptitude upgrade and then the EPrints apt method both took a considerable amount of time for no reason I could deduce. It makes me fearful for my laptop, and if the hardware doesn't work, that is a very big problem for me. As for the configuration/branding exercise, when compared to the customization processes for Drupal and DSpace, I again would place it somewhere between the two: Drupal continues to be on top and DSpace in third. In non-technical terms, I am not feeling EPrints yet, and part of it is that I don't have an example library using EPrints that I have really liked. The ROAR listing of repositories using EPrints was large, and the examples I looked at were interesting, but nothing inspired me. I have not spent as much time with EPrints as with the other two, so I will be curious to see whether my opinion changes at the end of these two weeks.

Thursday, October 14, 2010

Week 7

I have had a week of discouragements in regards to the DSpace installation, as my late posts can attest. My beloved laptop has had sudden battery problems. The phone company has done a lot of repair work in my neighborhood after a big storm, which has meant random internet blackout periods. And I am missing something about DSpace. Intellectually I understand the hierarchical nature of the DSpace setup, but it is not a natural fit. When I try to apply those principles of organization to my digital collection, it does not work well. I struggled to pick a collection at the beginning of the semester and still feel that it is not well formed. My conceptualization is not very firm, as I don’t have any practical experience to draw on. In previous posts I have been quite critical of institutions choosing dull or not very relevant collections for digital projects, calling it a cop-out. I still think that, but I have a new appreciation for the difficulties inherent in the process.

On a more positive note, the readings for Unit 7 were both very interesting. I appreciate the perspective the Stanford authors laid out as to the successes and failures of a major digital repository. It gave a new sense of the speed of change the digital preservation community is experiencing. The Johns reading about the context of repository software design gave insight into the root causes of differences between systems. The Greenstone open-source system is one I find particularly interesting because of its focus on multilingualism. The New Zealand Digital Library Project and the University of Waikato developed the project, and a partnership with UNESCO has helped make it an international community. A phrase from the website has particular resonance regarding increasing the “awareness of the social implications of information technology”. This is brought home by the use of Greenstone for bilingual digital collections for minority languages at risk of extinction: the New Zealand project has included Maori, and there are others in Welsh, Kazakh, Hawaiian and more. It pleases me to see a digital library with two preservation roles to play. The Greenstone project also focuses on its interoperability with OAI-PMH and METS. It is also able to import and export collections with DSpace. How that works is something I will be interested in exploring further.

Tuesday, October 5, 2010

Week 6

I had an unnervingly easy install of DSpace. I did not understand all the steps in the process, but I did understand a fair amount, which means I would feel confident contributing to a discussion about DSpace but would not make decisions without a systems librarian. What is funny is that I had difficulties downloading the WRF files for the tutorials, which I am still trying to sort out. I was able to start the setup of the site as administrator and uploaded a collection piece, but I will want to spend a lot more time experimenting before I make any final decisions about it being the best choice for my collection project.

Wednesday, September 29, 2010

Week 5

I found myself browsing through the list of module downloads for Drupal for a long time. It is kind of like Wikipedia, where one item leads to another to another to another. Many modules offered tools I didn’t know about or understand. Many were named in ways that didn’t make their purpose clear (personal favorites include Bad Judgment, Bespin (planet home of Lando Calrissian’s Cloud City from The Empire Strikes Back) and Awesome Relationships). One way I thought would help me find a useful module was to think in terms of the collection content, specifically the webcomic format. Two modules looked promising, Webcomic and Comic, but neither was supported in the current Drupal version, so they could be downloaded but not enabled. I was able to find a compatible module that supports a type of animation process called page-flipping. But I don’t have a collection example to test it on, so it is only theoretically helpful at this point. Another approach was to think of features I use consistently in my own life, which led me to the print-friendly module. The install was successful, even if the purpose is a little dull; it is, however, an option I use often online. Another idea I thought would be helpful was a module for Site Documentation. Ironically, the documentation for that module was incomplete, and so it seemed suspect. One tool I looked for but could not find was a random button, a common feature on most webcomic sites. I went about it as logically as I could, but I did not find anything by name or keyword. It could certainly be that I misunderstood the description of the correct module (a possibility being Random Viewer), but I honestly don’t think so. That made the inability to search the thousands of modules disappointing. However, given the active support community, I feel confident that someone will point me in the right direction. All in all, Drupal was a success.

Week 4

The positives of Drupal content management for the webcomic collection I am developing rest on two points: 1) the levels of permissions the role function allows give a lot of control over the collection, and 2) it is scalable to the size and purpose of the community of users. I envision this collection as a test for a more informal, community-based project, not one that is necessarily embedded in or part of the workflow of large-scale projects amongst professional information managers. It is for the amateur enthusiasts, as it were, interested in discoverability and preservation, or the smallish public library adding patron-created content for fun. In either case, a distinct advantage of Drupal is that it allows different users different access levels in a flexible way. The pyramid shape of access, from the many anonymous users at the bottom to the (very) few administrators at the top, gives real control over managing content input and searchability. The negative side of that coin is that the administrators truly need to know what they are doing, and the initial setup needs to be very good, or problems will appear quickly and escalate. While Drupal offers good documentation and support, the administrator would need to be an expert. Since the webcomic community is, in general, not financially stable, this would most likely be volunteer labor, which carries its own concerns. This is why the scalability of Drupal is useful, as community-created resources often have a growth period. One success criterion I am curious about is the best method of communication for such a project. In the open-source world, a forum will grow to dominate the community and form a common area that ensures communication occurs in a timely manner. For the webcomic community, there is not currently a dominant website that fulfills that function. If this collection project were to be a success, it might evolve into that dominant site, and at this point it is not clear to me how well Drupal will handle forum tasks. Other criteria to investigate would be security, preservation support, and methods for avoiding redundancies. I have no doubt more criteria will develop with further experience.

Monday, September 20, 2010

Week 3

I am ashamed about how late I am finally finishing the week 3 assignments. In my defense, it has been a truly awful week, with my sister in hospital but her new baby at home, and both needing to be cared for. So I am a poor example for gauging the tech activity levels, as I did them in starts and stops and did not finish on time. If it matters, I have found the tech activities to be well described, and usually when I run into problems someone has already worked them out on the discussion board. What I need to do is really delve into the node concept in Drupal and get a good handle on how it works, because without that basic understanding I will make mistakes further down the line. I think Drupal's ease of use may be lulling me into a false sense of security that I already know how it all works. I will be on my guard. Or at least awake.

Tuesday, September 7, 2010

Week 2 - Library Hi Tech article summary & review

The article I chose to review from Library Hi Tech was "CMS/CMS: content management system/change management strategies" by Susan Goodwin, Nancy Burford, Martha Bedard, Esther Carrigan and Gale C. Hannigan.

In summary:

The Problem – the web presences of the five libraries serving the 45,000 users at Texas A&M were inconsistent, difficult to navigate, decentralized and underutilized.

The Action – a Web Integration Team (WIT) was formed and given the responsibility to develop the University Libraries’ web presence. In an effort to be inclusive and promote staff involvement, team members were chosen who were in a position to be ‘agents of change.’

The Goal – to choose and implement a content management system that would provide a consistent, integrated and user-friendly web presence.

The Results – the WIT offers lessons learned and recommendations for the process of choosing a CMS.

Lesson 1.

Systems people should not be the only ones tasked with content management. Recommendation: Content management goes beyond technical requirements and demands the attention of upper management, subject specialists, collection developers and other library departments. Most important is to have a way for the decisions of the group to be implemented by people in positions of authority.

Lesson 2.

Developing an integrated approach to the web presence revealed a lack of organizational unity. Recommendation: Use this time as an opportunity to examine old hierarchies and concepts that may be holding back knowledge sharing.

Lesson 3.

Creating an integrated web emphasized the need for a new work culture of collaboration between libraries. Recommendation: Purposefully promote knowledge sharing and a ‘big picture’ perspective amongst staff and other stakeholders.

Lesson 4.

Details can derail the process. Recommendation: it is better to view development as an iterative process of continuous improvements. Make the logical decision for the current environment and plan to revisit in the future.

Lesson 5.

Focusing on electronic resources highlights the paradigm shift for libraries from an internal organizational focus to a user-based focus. Recommendation: Acknowledge that internal staff are users also. Do not downplay the impact of this change.

Lesson 6.

Roles and responsibilities will change. Recommendation: rethink job descriptions, redesign hiring requirements, reassign and train people to meet the new needs of the library.

Lesson 7.

Developing a website creates change that must be managed. Recommendation: acknowledge that it is a process that requires advanced management and leadership skills.

Lesson 8.

Communicate. Recommendation: establish regular methods of communication at all levels of staff and solicit feedback for ‘reality checks’.

Lesson 9.

The work load will not decrease over time. Recommendation: Without long term commitment, content management system software is wasted.

While trying to recap the article, I found that the lessons offer valuable insights. I think the authors are correct in calling it a paradigm shift that will create opportunities and obstacles from a management perspective. It is also key to acknowledge that staff involvement will have its ups and downs. There is a noticeable lack of faculty involvement described, which could lead to major problems for the library if outside stakeholders were not included in the process. I found the changes to the organizational chart, as well as the sample of content management deadlines, helpful as an indicator of the level of change and the amount of work the process entails. The authors' final analysis was that while choosing the CMS software was a major accomplishment, more importantly, the development of a significant web presence heralded major changes for the library culture and its staff that required delicate and affirmative management. All in all, I appreciated the common-sense reflections and resolutions the authors present from their experiences.

Tuesday, August 31, 2010

675 Week 1

For the first week of the IRLS 675 class, the assignment is to collect 25-30 digital objects that will be used to experiment with ways of organizing digital collections. A seemingly simple task. And I am flummoxed. It is a case of not being able to see the trees for the forest. As someone with a computer and a connection to the Internet, I must have loads of digital objects, but I am not finding a cohesive theme besides my finding the content useful/funny/beautiful/necessary/etc. I have pulled together the necessary number of items, but I am not happy with the result. And a surprising number of the items I find reasonable to share are ephemera or silliness. I am frustrated and think I need to go back to the drawing board on this one. Luckily, all the installations went well, so there is that positive for the week.

Monday, August 2, 2010

Week 11

To summarize where I was at the start of this class is easy: I had no previous experience with the technologies we use beyond the most elemental level. I knew how to turn on the computer, and that was about it. I relied on the software to make all the decisions and distinctions for me. Over the weeks I have had a massive expansion of my knowledge, in good ways, like learning about Linux, command lines and PHP, which fit into my librarian mind logically and well. It would seem like MySQL should as well, but I have a major stumbling block with that section and am having a hard time ingesting the information. Still, the difference from the start date to now is immense and very satisfying. What I am interested in seeing is what information will remain useful after the class is done and I am no longer in my 'student' mode.

Wednesday, July 28, 2010

Week 10 - maybe a little whiny

While working on the database sections, I was reminded of a special project I was part of about two years back at my public library. The cataloging department had a massive backlog of items from the transition to outsourcing the cataloging and processing of incoming items. It was not a smooth transition, and for almost a year there were problems with a sizable percentage of incoming items. Most of these were failed MARC records that, due to one problem or another, weren’t working in the Millennium system. I was hired for about ten months to find, update or replace each record as appropriate. If you ever really want to learn cataloging rules, look at several hundred bibliographic records every day and try to find what is wrong with them. Why I mention this is that we needed to spend the least amount of time on each record we possibly could, while still allowing the public to use them and allowing us to track carefully for reimbursement. For our internal needs, we wanted to be able to track every last one of them, and each failed record had to include several data points that could be used for various retrieval queries: date, format, error type(s), record id #1 and failed record id #2. By looking at what was wanted at the end, we worked out the minimum of what we needed to do to each record. It generally worked, but it was never more than adequate, and I have never really understood why.

Having had no experience with MySQL at the time, I am finding it interesting to see the parts we did that matched up with the logic behind MySQL, and to contrast those with the areas where we didn’t match. By focusing too much on the most immediate need for the changed records, which was to prove we should get money returned, we didn’t do a very good job of organizing the failed records into other useful tables that could have helped us later on. Getting the concepts behind MySQL and its queries has given me some ideas as to where the problems might have been.
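The failed-record tracking described above can be sketched as a small database. Here is a minimal version using Python's sqlite3 (standing in for MySQL); the table name, column names and sample data are entirely my invention, not the library's actual schema.

```python
# Hypothetical sketch of the failed-MARC-record tracking table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE failed_records (
        received_date TEXT,
        format        TEXT,
        error_type    TEXT,
        record_id     TEXT,   -- the working bibliographic record
        failed_id     TEXT    -- the failed record it replaced
    )""")
rows = [
    ("2008-03-01", "book", "bad 245 field", "b1001", "f9001"),
    ("2008-03-01", "dvd",  "missing 008",   "b1002", "f9002"),
    ("2008-03-02", "book", "bad 245 field", "b1003", "f9003"),
]
conn.executemany("INSERT INTO failed_records VALUES (?,?,?,?,?)", rows)

# One retrieval query we needed: how many failures of each type,
# e.g. to support a reimbursement claim.
for error_type, n in conn.execute(
        """SELECT error_type, COUNT(*) FROM failed_records
           GROUP BY error_type ORDER BY COUNT(*) DESC"""):
    print(error_type, n)
```

Keeping the data points in one queryable table like this, rather than shaping everything around the single reimbursement question, is roughly what we failed to do.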

I used Wikipedia as a MySQL database example in this week's discussion post and do not feel I did a very good job on the assignment. The words are just out of reach. I read too fast on a subject I do not easily understand, and I will have to slow down and think this through again when I can devote a real block of time to it. Databases feel like the section that will never end.

Wednesday, July 21, 2010

Week 8 - again

It is an interesting question how my involvement with future technology planning will be changed by my involvement with the DigIn program. One of my goals for joining the DigIn program is to reposition myself professionally and possibly move out of the public library sector, so my previous experiences may have little to no bearing on my future. What can be stated with certainty is that DigIn has taught me a great deal about the current state of the profession and its most pressing needs, as well as given me a better understanding of the nuts and bolts necessary behind the scenes. Both of those facts can't help but color my future interest in technology management and planning.

Tuesday, July 20, 2010

Week 9 - it was hard!

With the ERD diagramming project this week, I kept getting caught up in the differences between a one-to-many versus a zero-to-many relationship between entities (I am also praying there is a difference between these and a many-to-many relationship). I thought I had a fair idea after my initial read-through, but when actually diagramming I kept getting hopelessly confused. Every time I tried to begin again, I worked out a different diagram. Eventually, and I know this sounds obvious but it really did take me some time, I realized I needed to define some relationships myself. Some of the relationships were pre-determined by their nature, but several could be open to interpretation. Which is why it is called a planning process. I can see how other people would come to different conclusions, and that these differences might make for significant differences in end-user results. My final ERD diagram tries to make my decisions clear, and either I did it or I have mucked the whole thing up. Returning to the readings while I was working all this out actually did not help me much; the Mostafa tutorial on tables ultimately was the basis of my eureka moment. I am curious if others found that there seemed to be less uniformity in the Crow's Foot notation than I was comfortable with. For example, I couldn't tell if the flat or dotted line between entities (shown in the Perfect Fit article) was universally applied, as I did not see it in other readings or examples I found on the web. I think experimenting with different types of information and uses, looking over examples, and a great deal of experience are required to reach any competency in this field; it has given me new respect for database designers.
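A sketch of the distinction that kept tripping me up, using SQLite tables from Python (the entities and column names are hypothetical, borrowed from my webcomic collection): a one-to-many relationship puts a foreign key on the "many" side, a many-to-many relationship needs a junction table, and zero-to-many differs from one-to-many only in the minimum cardinality, not in the schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One-to-many: each page belongs to exactly one comic,
    -- so the "many" side (page) carries the foreign key.
    CREATE TABLE comic (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE page  (id INTEGER PRIMARY KEY,
                        comic_id INTEGER REFERENCES comic(id),
                        number INTEGER);
    -- Many-to-many: a tag describes many comics and a comic has
    -- many tags, so a junction table holds the pairs.
    CREATE TABLE tag       (id INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE comic_tag (comic_id INTEGER REFERENCES comic(id),
                            tag_id   INTEGER REFERENCES tag(id));
""")
conn.execute("INSERT INTO comic VALUES (1, 'Example Strip')")
conn.executemany("INSERT INTO page VALUES (?,?,?)", [(1, 1, 1), (2, 1, 2)])
conn.executemany("INSERT INTO tag VALUES (?,?)", [(1, 'humor'), (2, 'sci-fi')])
conn.executemany("INSERT INTO comic_tag VALUES (?,?)", [(1, 1), (1, 2)])

# Zero-to-many just means the foreign-key side may have no rows at all.
pages = conn.execute(
    "SELECT COUNT(*) FROM page WHERE comic_id = 1").fetchone()[0]
tags = conn.execute("""
    SELECT tag.label FROM tag
    JOIN comic_tag ON tag.id = comic_tag.tag_id
    WHERE comic_tag.comic_id = 1
    ORDER BY tag.label""").fetchall()
print(pages, tags)
```

Turning the diagram into tables like this was, for me, the only way to see which relationships were truly pre-determined and which were my own interpretation.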

Tuesday, July 13, 2010

two kinds of tech plans

As I was digesting the readings for the week, it occurred to me that it might make the most sense to have a kind of two-faced technology plan, meaning embedding a useful one inside a generic one so as to meet all the necessary goals. On the one hand, a technology plan can be a useful guide for laying out an organization’s goals and technology philosophy, as well as a way to inspire and invest staff, stakeholders and the public. On the other hand, making all three of those very disparate groups happy can be a tricky business. As Michael Schuyler writes, most people don’t know, nor care to know, the intricacies of how technology is managed; they just want it to work when they turn it on. So for them you give a generic technology plan that satisfies on the surface level. But for the people who really do need and want to know what to do, who should do it and how to pay for it, a realistic technology plan is a necessity, one that involves elements I believe are important to the health and vitality of a library, like the concepts behind environmental scanning. Some kind of double-speak might emerge where, behind the generic topics, useful information is coded for those who are truly listening. Also, I can’t decide if the idea of peer-reviewing technology plans, as Schuyler suggests, would be brilliant, or if the potentially wide range of differences between libraries makes it like comparing apples with oranges and essentially useless.

Wednesday, July 7, 2010

How I learned xml and grew to love youtube

One of the learning tools that has come as a surprise to me is YouTube. I frankly thought YouTube was for ephemera, like a sneezing panda or a kid getting his finger bitten by his baby brother. I did not take it seriously, nor spend much time in the YouTube playground. It has been a major find for resources in the technologies that support the digital library. The latest case in point is XML instruction. The tutorial that was ultimately the most useful to me was "Just Enough XML to Survive", because it condensed the ideas into their purest forms and then strung them together, showing the underlying logic. Plus, the voice was great, the presentation was clean, and it all was aesthetically well done. I wonder how video teaching differs from classroom teaching, and whether university professors are able to make the transition. I poked around for 'amateur' tutorials, and both their number and their quality were very impressive. The Four Minute series by Zlantorb is a good example of a kind of flash-card system for learning and refreshing memory. As I am not a big fan of w3schools, I was happy to have found other options and often surprised at their quality. YouTube will not be overlooked as a tool any longer.
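To record what the tutorial condensed for me, here is the purest form of the idea in a runnable sketch (the sample documents are my own): XML must be well-formed, with one root element and properly nested, closed tags, and a parser will simply refuse anything that isn't.

```python
# Python's built-in parser rejects documents that are not well-formed.
import xml.etree.ElementTree as ET

good = "<comic><title>Example</title><pages>24</pages></comic>"
bad  = "<comic><title>Example</comic></title>"  # tags close out of order

root = ET.fromstring(good)
print(root.find("title").text)  # prints "Example"

try:
    ET.fromstring(bad)
except ET.ParseError as e:
    print("not well-formed:", e)
```

That strictness, which first seemed like pedantry, is the underlying logic the tutorial strung together: every tool downstream can rely on the structure being there.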

Wednesday, June 30, 2010

a little knowledge

I know just enough about HTML to be dangerous. In a previous life I worked on a digital project that required (fairly simple) coding, and I felt I had a good grasp on how it all worked. The suggested tutorials were fine, if not terribly inspiring, and provided a good refresher. What I found useful from w3schools was the list of common tags in the HTML 4.01/XHTML 1.0 Reference. As long as the web page does not become too ornate, I feel comfortable enough. What I am not sure about is the reasons for and need behind the different DOCTYPE declarations.

p.s. while all summer colds are evil, I feel it must be said that a summer cold in Arizona when it is over 110 degrees is just hell. Luckily, I can't contaminate any of you, my virtual classmates.

Tuesday, June 22, 2010

Teaching

The other side of learning is teaching. I only have the DigIn program for experience and am curious about what others have experienced, good and bad, in online teaching. For myself, I like D2L so far: I have had no real technical issues and the classes have been interesting and well planned. But I still have the mindset that a classroom would be better. I wonder if I will change my viewpoint as the classes progress.

Learning style & Myers Briggs

Unfortunately this is a repeat of what I posted in the discussion blog - I hope to add more after experimenting a little with my ideas on prioritizing.

The learning style article was both insightful and timely. It reminded me a great deal of the Myers-Briggs test I took two years ago, when I recognized that I wanted to do something new in my career. It was an eye opener and I reference it in my daily work life quite a bit. The idea that I am INFP helped give me perspective on why I don't do well having to immediately respond to emails; I just don't think that way. And knowing that, I can compensate in some ways and make my job less stressful. After reading the Felder and Soloman article I worked out what type of learner I am, and I think that will help me find similar compensating techniques. As I have complained ad nauseam, time management and keeping up is one of my largest problems. It seems that understanding more about how I learn will allow me to better organize each week's time frame and be more efficient.



Wednesday, June 16, 2010

trials and tribulations

This week was a reminder to stay focused and humble. Previous assignments in Ubuntu, like the downloads, have gone well for me and been without significant problems. This time, because of personal plans over the weekend, I have been running late and have had problems with both Ubuntu and the sandbox. Help from the activity discussion board has been great and I remain deeply impressed with how timely Professor Fulton has been (really, not in a smarmy way but in a thank-goodness-he-is-around-and-patient-or-I-would-freak way). And boy do I love the snapshot feature! Once in, however, I felt that this week’s topic of user and group creation and permissions made more intuitive sense than previous topics, and I felt fairly confident moving around in the CLI. I also made a big mistake with the sandbox by powering it down when I shouldn’t have - something I will never do again, and which has permanently impressed upon me the differences between the two. Which is a good thing.
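For my own notes, here is a minimal sketch of this week's topics. The group, user, and file names (readers, jsmith, notes.txt) are invented for illustration, and the user/group commands need root, so they are shown as comments:

```shell
#!/bin/sh
# User and group creation requires root, so only sketched here:
#
#   sudo groupadd readers                # create a new group
#   sudo useradd -m -G readers jsmith    # new user with a home dir, in that group
#
# Permissions work without root: every file carries rwx bits for
# owner, group, and other, which chmod can set as three octal digits.
touch notes.txt
chmod 640 notes.txt         # owner rw-, group r--, other ---
ls -l notes.txt             # first column shows -rw-r-----
stat -c '%a' notes.txt      # prints the octal form: 640
```

The octal digits are just the rwx bits read as binary: 6 is rw- (110) and 4 is r-- (100), which is where my a-ha! moment came from.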


ETA: I have just tried to re-do something from the assignments and it is not working again. Which is not a good thing.

Tuesday, June 1, 2010

This blog post is about the week 2 assignment for class 672. The blog assignment asked each of us to comment on how we felt about the teaching videos and tutorials.

After successfully installing Ubuntu and completing the rest of the assignments for class, I looked back to see what worked best for me. Accessing the remote desktop went smoothly, though I ultimately watched Professor Fulton's youtube lessons several times. The Arthur Griffith lectures were likewise useful, as the visuals clarified a great deal of the command line structure and the logic behind it. In fact I shudder to think of trying to learn these concepts without those visual aids. Because I took extensive notes on the Griffith lectures and had to pause frequently, the ten lectures took significantly longer than the estimated 1 hour. The six lessons in Look Around the File gave an overview of important topics in the basic structure of organization. I particularly had an a-ha! moment with the codes for granting permissions. One area I will need to go over again is grep. The Intro to Unix Command Line structure gave a useful (and brief) description of key concepts. The tutorials from LinuxCommand proved the most difficult to follow. Most of the commands and concepts covered by Arthur Griffith made sense to me, and practicing the LOM examples solidified the ideas. Areas I had difficulty with were the I/O redirection commands, using pipes, and the kill command. Likewise, job control needs more work. I hope to go over the tutorials again next week.
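Since redirection, pipes and kill were my trouble spots, a tiny sketch to come back to (the file fruit.txt and its contents are made up for illustration):

```shell
#!/bin/sh
# This week's trouble spots in miniature.
printf 'apple\nbanana\napricot\n' > fruit.txt  # > redirects stdout into a file
grep '^a' fruit.txt                            # prints the lines starting with 'a'
grep '^a' fruit.txt | wc -l                    # pipe: grep's output becomes wc's input; prints 2
sleep 60 &                                     # & runs the command as a background job
kill $!                                        # kill it by PID ($! is the last background job);
                                               # in an interactive shell, kill %1 uses the job number
```

The pipe is the part that finally clicked for me: each command only sees a stream of text, so small tools can be chained into bigger ones.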

I also have to say that I don't get why tech guys do things like take the e out of user. I refuse to believe that skipping one keystroke is a truly valuable time saver.

Monday, May 24, 2010

Ubuntu forum

My observations of the Ubuntu Forums Absolute Beginner Talk are not in any particular order.

It seems clear that the community, even at the beginner level, has a specific language in use that I will need to get a handle on quickly in order to understand the questions and the answers. Examples include terms like Debian and partition, which are repeated often enough that they are clearly key concepts. I poked around some but did not find anything like a glossary on the forum site. Anyone know of one?

The post on malicious commands made almost no sense to me and brought home the reality that a forum community is based on trust. It is like relying on the kindness of strangers. Caution and patience both seem like necessities for getting the most out of this type of forum.

I spent quite a bit of time looking over the organizational set-up for such a large forum. My previous experience with forums has all been on a much smaller scale, and the needs of such an active and large community greatly alter the framework of administration. As the forum is volunteer run, staff not only offer their expertise and time but must also have good negotiation and management skills. I am thinking specifically of thread steering as something that requires tact and negotiation. All done online, with no direct meetings or even telephone calls to personalize communications.

The forum states that it is the largest GNU/Linux support forum and that it was started by a single person only 2 years ago. That is pretty astonishing to me. Or is that a typical start-up for these types of forums? I get that a forum grows because it becomes the place everyone goes because that is where everyone is, but how does it get to that weight? Again, the issues of trustworthiness and strong management must play a part.

Familiarity will make this tool more useful to me so for now I need to play and poke around to see how it works some more.

Thanks,
Bethany

Monday, May 17, 2010

ground control to major tom

Test entry for bler's blog as part of assignment for DigIn class 672.

Five minutes ago I got off the phone with a friend of mine who (gasp!) only has a landline phone. Refuses to have a cell phone. He claims not to be a Luddite but rather a 42 year old man with no desire to be in constant contact with people. So maybe a hermit is a better description - a digital hermit. And I respect his decision even as I start adding to the digital tsunami with my first blog.

Curious - is there any subtext to which blog one uses? If one chooses blogger, does it have a different connotation than choosing wordpress or livejournal? Is one considered more serious than another, or is there a demographic divide, like age, that differentiates between them?
Thanks for reading
Bethany