ePortfolio: Dr. Simon Priest

TECHNOLOGY & VIRTUAL TEAMING

 

 

EMERGING TECHNOLOGIES: The normal distribution, divided into +/- three standard deviations (SD units), shows the approximate percentages that represent the diffusion of innovation. My experimental work with technologies has been in the left sixth of this curve. Today, examples of emerging technologies would be atomic storage (now in research and development) and wearable devices (adopted mostly by innovators and early adopters). Other well-known technologies have already emerged: MOOCs are now used by an early majority, and online learning is well established (currently being discovered by the late majority).

 

 

EDUCATIONAL TECHNOLOGY PREDICTIONS: Since 2006, I have been making annual predictions about how emerging technologies will make a difference in education. These are the latest (2016), taken from this slideshow.

 

 

TECHNOLOGIES AND THE HYPE BUBBLE: Partly driven by marketing exaggerations, educators often have unrealistic expectations for emerging technologies, and some see every new flavor as a panacea for what ails learning. Initially enthusiastic about new technologies, they are soon disappointed by poor product performance. When driven in this manner, a hype bubble develops on the upslope of the normal growth curve (measuring acceptance over time). The bubble shows accelerated growth due to unbridled enthusiasm, followed by a drastic drop as disappointment leads to disillusionment with that technology. Several educational technologies are positioned on these curves with estimated times to reach widespread and mainstream public acceptance. The hype bubble and curve are based on Gartner's Hype Cycle for media and technologies.

 

 

HISTORICAL EVOLUTION OF WEB 3.0: Toward the end of 2014, I was asked to share my thoughts on how Web 2.0 would evolve into 3.0 and what the future of the Internet would look like. Using a version of the illustration below, I chose to share a positive perspective of how this tool would help solve our global crises. The diagram traces the history of popular technologies and the Internet's evolution (from initial military weapon, through online school, to eventual virtual makerspace), along with the corresponding iterative versions of the web.

Amazon and Netflix are good examples of commercial operations that have spanned all three iterations. In Web 1.0, their webpages were simply lists of the products and services they offered. In Web 2.0, they became a social community by asking members to review and recommend products and services. In Web 3.0, using personalized and customized algorithms, they began recommending products and services based on each user's behavior as driven by semantic data from the advice and behaviors of previous members (others also bought this book and we think you'll enjoy this movie). A natural progression in the third version will be to encourage users to co-create new products and services based on the recommendations and gaps identified by prior members.

Web 3.0 is "semantic" or data-driven where the relationships between two datum also become data. This will enable us to search semantically, which means that the language we search from becomes a modifier of our future searches. We see this to a slight degree currently, but in the future the influence will be considerably greater. Not only will the search process customize to the user (if you often search for medical terms, your future search results will have medical orientations, until you search otherwise), but the organization of web information will also adjust to the user's preferences (medical data will be more closely connected within itself and to you as an individual user). Beyond this, the search terms will not need to be precise keywords, since metaphor and synonyms will automatically be considered in the search process. Our language can be looser and more human.

Artificial intelligence will figure prominently in this iteration: more accurate search results will be returned because computers will begin to understand information contextually, just like humans. This means that the Internet will become your personal assistant, getting to know you better over time by incorporating your likes and dislikes into your online interactions. Language translation will become instantaneous, allowing us to speak with other users in their native tongues. Emotional interactions between humans and computers will be possible, with brain implants and facial recognition used to read our emotional states.

All of this means that we will be able to work together, with project management support, more effectively than was ever possible before Web 3.0, and this will lead us to crowd-creation. Due to this expected enhancement of searching information and of emotional interaction with one another and/or with other devices or appliances online, collaboration will move toward co-creation. We will be able to invent new ideas and improve on those ideas to the point of solving real-world problems like climate change. Co-creative innovation will not necessarily disrupt industries, but it ought to result in jointly treasured solutions with value for all, free from conflicts of interest, and independent of market influences.

 

 

ONLINE CROWD COLLABORATION: I’ve become a strong proponent of online crowd collaboration as a possible method to solve the really big problems of the world like climate change.  We started looking at collaboration in large teams during my 1997-2005 research with virtualteamworks.com and in classroom crowds during the decade after this with MOOCs, SMOOCCs, and e-learning business degrees at my universities.

At a minimum, we found most collaborative collectives require the means to: communicate, maintain healthy relationships, organize information, manage projects, manage resources, generate ideas, make decisions, create products, and present results.  The following list expands these nine functions with technologies that can be used to support collaboration.  In this list, items 1 through 5 are omnipresent (widespread across all the other items), while 6 through 8 form a core that is iterative (repeated as many times as needed), and 9 is the final output.  A crowd collaborating online would need tools to cover all of these functions (a minimal coverage check is sketched in code after the list):

  1. communicating synchronously and asynchronously (using email, chat, messaging, discussion lists, screen sharing, and conferencing);
  2. enhancing healthy relationships & building trust (by applying social networking, online team-building, facilitation and mediation);
  3. organizing information (searching files, loading databases, tagging content, tracking versions, mapping concepts, and archiving in the cloud);
  4. managing projects (planning logistics, routing workflow, measuring progress, and celebrating milestones);
  5. managing resources (scheduling time, spreadsheeting budgets, allocating resources, constructing makerspaces, and identifying experts);
  6. generating ideas (imagining prototypes, using creative design apps, recording brainstorms, dictating voice, and translating languages);
  7. deciding among options (building consensus, polling votes, aggregating responses, and supporting decisions);
  8. creating products (writing wikis, editing documents, whiteboarding graphics, developing software, and fixing bugs); and
  9. presenting results (authoring slideshows and using audio or video editors; see also item 1, communicating).
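As a quick illustration of that last point (my own sketch, not part of the original framework), a proposed toolset can be checked against the nine functions; the tool names below are placeholders rather than recommendations.

  # The nine collaboration functions from the list above, in shorthand.
  FUNCTIONS = {
      "communicating", "relationships", "organizing information",
      "managing projects", "managing resources", "generating ideas",
      "deciding", "creating products", "presenting results",
  }

  def coverage_gaps(toolset: dict[str, set[str]]) -> set[str]:
      """Return the functions not covered by any tool in the proposed toolset."""
      covered = set().union(*toolset.values()) if toolset else set()
      return FUNCTIONS - covered

  # Hypothetical toolset: tool name -> functions it supports.
  toolset = {
      "video conferencing": {"communicating", "presenting results"},
      "shared wiki":        {"creating products", "organizing information"},
      "polling app":        {"deciding"},
  }
  print(coverage_gaps(toolset))  # functions still needing a supporting tool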

As we evolved online from team collaboration to crowd collaboration, we realized a profound need to provide additional makerspace access to our program and course participants. We called this unique combination (of collaborative processes with makerspace labs) our "ColLab" or "Collaboratory" after William Wulf. Scaling this evolution from crowd collaboration to global collaboration will eventually lead to ideas that fix the world’s troubles.

 

 

COLLABORATIVE MAKERSPACES: Makerspaces are physical and virtual fabrication labs where learners can experiment with a do-it-yourself approach to the improvement of existing tools and the invention of new ones. "Makers" support an open-source approach that favours cooperative sharing by making blueprints, source code, prototypes, and other constructed gadgets freely available to all makers within (and outside) the makerspace. Tinkering in electronics, robotics, and digital fabricating (with 3D scanning and 4D printing), makers have the potential to co-create impressive solutions, innovate awesome software, and engineer amazing hardware. We found the following resources to be valuable for a truly collaborative laboratory or ColLab:

  1. materials (chemicals, energy sources, electronics, metallics, textiles, and decorations);
  2. tools (extruding, shaping, cutting, joining, measuring, observing, protecting, and moving);
  3. modelling (robotics, digital fabricators, micro-controllers, machines, and engineering toys);
  4. audio lab (microphones, speakers, music synthesizers, and audio editor applications);
  5. video lab (cameras, green screens, slideshow creators, and video editor applications); 
  6. data storage (cloud and fog library of source code, blueprints, and performance results); and
  7. computing (laptops, tablets, printers, and applications for documenting, designing, or coding).

 

 

M.O.O.C. PROS AND CONS: For those who haven't heard it before, the term "MOOC" (mook) stands for Massive Open Online Course.  Massive indicates very large numbers of participants, often due to uncapped or unlimited registration.  Open refers to eliminating participation barriers such as enrollment (no entry requirements means anyone with interest can participate), materials (frequently open educational resources are used and expensive textbooks are avoided), and cost (most are free).  Online suggests a delivery method employing technology that ubiquitously serves up publicly accessible information.  Course elements include self-paced learning (within a reasonable schedule), a fee for credentials (credit or certificate), and a single instructor (usually a professor of elite standing).  A typical MOOC involves watching a captured video lecture or listening to an audio podcast, linking to additional open education resources on the web, and following up with multiple-choice quizzes or other assignments such as posting work in a blog or e-portfolio and conversing with peers in a forum or chat room.  Interaction among participants is mostly cooperative. Therefore, MOOCs represent a change in scale and not necessarily pedagogy.

MOOCs began emerging in 2008 with the first such labeled course offered from Canada.  Driven by the desire to examine further cost-cutting potential, several Ivy League universities joined in.  By 2012, dubbed the “Year of the MOOC” by the NY Times, MOOCs had fully emerged with several early USA-based competitors such as Udacity, edX, Coursera, Udemy, and Khan Academy.  Here is a summary of some pros and cons unique to the MOOC.

 

 

MOOC vs. SMOOCC: In 2012, aware of some of the notable shortcomings of MOOCs and the need to improve on them, I experimented with a Synergistic Collaborative version that I called a SMOOCC (smook).  Since I was teaching E-Facilitation, a course I had frequently taught online from 2000-2005, I believed this variation should similarly be highly facilitative in situ and put its own content into practice.  I added more sophisticated communication tools, enabled the formation of interactive study buddies, groups, and teams, encouraged transdisciplinary collaboration, and added a few more bells and whistles.

In addition to the internal communication tools of chat rooms, discussion fora (forums), blogs, and e-portfolios, we added outside communication tools and collaboration applications such as wikis, tweets, social networking media, Rich Site Summary (RSS) feeds, video conferencing, web conferencing, document sharing, and several other highly specialized collaborative applications.  Since much is lost in translation, study groups were formed around common native languages to enable effective communication of ideas (this unfortunately reduced cross-cultural exchanges, an obvious MOOC benefit).  The techniques of transdisciplinary collaboration, problem solving, and creative thinking were taught and practiced to co-creatively solve practical problems related to facilitation. In doing so, participants self-facilitated their own processes by applying new learning to further their next learning.

Lectures and demonstrations were delivered live, instead of as pre-recorded video.  However, these were captured and archived by the sponsoring organization for re-use by participants who missed the original or were dispersed in different time zones.  Instead of presenting my singular view of concepts, I supplemented the presentations with explanations from many guest facilitators in online communities around the world, in their own words (with simultaneous translations).  Participants were asked to source, interpret, critique, apply, and evaluate information. They discussed, aggregated, replicated, remixed, repurposed, reflected on, published, distributed, and translated their findings. Participants coached and peer-assessed one another's facilitation performance, guided by complex rubrics. However, this still needs some improvement: we need to find better ways to evaluate learning in the massive environment.

Participants engaged in both synchronous and asynchronous learning experiences that were interactive and collaborative. Participants had study buddies. Study groups were purposefully formed from these partnerships. Several study groups were combined into collaborative teams.  Each team was provided with a different real-life facilitation problem that was plaguing the sponsoring organization.  Participants were asked to solve it by initially working together in study groups and then again in the larger collaborative team.  Study groups co-created initial ideas and then cross-fertilized several of these ideas into a single solution from the collaborative team.  Teams presented their solutions to the entire course and these were refined by input from any participant.  The refined solution to each problem was implemented by the sponsoring organization.

Participants paid a small fee to enroll, although the sponsoring organization paid all costs associated with course development and delivery, including a textbook for each participant.  If a participant completed the course, they received a full refund.  If not, the fee was donated in the participant’s name to a charity associated with the sponsor.  This increased retention, with over three-quarters of the class completing the course and more than half passing it.  Eventually, many participants met in person to socialize and further their facilitation development.

Additional motivation was provided in the form of regularly updated feedback, progress tracking, games, and rewards.  Each participant received feedback from peers and from automated grading after every synchronous or asynchronous session, all tracked by me and several teaching assistants.  The progress of each participant was continually mapped and measured by completion of specific competencies, and these data were also tracked by us.  In combination, these two streams added motivation to seek additional learning and/or practice to gain competence.  Gamification was helpful for some content, while rewards worked for others.  Following a competency framework with a stamped-passport approach, rather than points or badges, allowed participants to see their path to completion without competing with co-learners.
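A minimal sketch of that stamped-passport idea (my illustration, not the system we actually built): each participant's passport records which competencies have been stamped, so the remaining path to completion is visible without comparative points or badges. The competency names and data layout are hypothetical.

  from dataclasses import dataclass, field

  @dataclass
  class Passport:
      """Tracks which course competencies a participant has been stamped for."""
      required: frozenset[str]
      stamped: set[str] = field(default_factory=set)

      def stamp(self, competency: str) -> None:
          if competency in self.required:
              self.stamped.add(competency)

      def remaining(self) -> set[str]:
          return set(self.required) - self.stamped

      def complete(self) -> bool:
          return not self.remaining()

  # Hypothetical competencies for an e-facilitation course.
  passport = Passport(frozenset({"framing", "debriefing", "mediating conflict"}))
  passport.stamp("framing")
  print(passport.remaining())  # the path still left to completion
  print(passport.complete())   # False until every competency is stamped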

This SMOOCC was similar to a "connectivist" MOOC (rather than the "extended" MOOCs of commercial providers) in that new learning was connected to old learning, and both the learning and its connections were managed through connections among people in a social network. The SMOOCC is unique in that it purposefully structures for collaborative connections and maintains these through the practice of facilitation (the point of this particular course). Therefore, I think my next step will be to experiment with delivering a more openly facilitated SMOOCC on Innovation and Sustainability.  Perhaps pre-formed intact groups will participate and make a practical difference or contribution in their local parts of the world.

In conclusion, a great deal of criticism has been leveled at MOOCs.  This is common for all potentially disruptive innovations in education.  Given time, I am confident that MOOCs will become more sophisticated in ways that parallel the past evolution of touted potential disruptions such as online learning, technology-enhanced face-to-face instruction, experiential education, television, radio, and early organized classrooms.  Whether MOOCs reach their full potential for disruption and eventually challenge the credential delivery of institutionalized education remains to be seen.

 

 

THE ELEARNING SOPHISTICATION SCALE: I began measuring elearning quality in 2011, with crude scales for sophistication and adoption criteria. A few years later, Bryan Fair, my Head of eLearning at BCIT, helped refine the scale for determining the degree of sophistication in the format of online courses. The values range from 0 = static pages to 10 = adaptive interaction (this dynamic scale will change over time as new technologies emerge). Note that most current university courses are "canned" (0 through 2): simply poor-quality classroom education ported into the online learning environment. On the flip side, I was extremely impressed with the 3D modeling functions (9) that we had at the Learning and Teaching Center and how we could create a jet engine or human anatomy model that learners could practice with on their mobile devices before they went live in the labs. The cost savings in wear and tear on parts and organs paid for the 3D model development.

Adjacent to this scale is a little acronym we used to remember the selection criteria, which ought to be applied to any new technological tool under consideration for adoption in elearning: STUDENTS should come first!

 

 

HALLMARKS OF THE E-LEARNING CAMPUS: So far, e-learning has evolved through three generations. The first generation was all about one-way transmission of information for the simple delivery of knowledge. The second generation improved to become two-way or transactional, with the interactive exchange of information. Yesterday, learners could ask questions, break out into private discussion spaces, and provide emotive feedback (I’m bored, confused, or left behind). The third generation, being developed now, is becoming more collaborative in process and content. Today, it allows learners to teach one another, complete specialized projects in combined groups, and co-create team solutions to problems posed by their teachers. When advising higher education institutions on setting up their e-learning campuses, I tend toward the future and present them with this model, which emphasizes the collaborative third generation. As diagrammed in the third iteration of a Sierpinski Triangle (a fractal triangle divided into composite triangles only three times), the model has three important pieces highlighted in yellow:

  A. Tools for COLLABORATION, including types of collaborative behaviour, kinds of collaborative groups, and ways to share information to enable collaboration;
  B. SUPPORT services, such as the help desk, bookstore, library, career planning, and others; and
  C. PLATFORM resources, like an online Learning Management System (LMS), and tools for synchronous (same time) or asynchronous (different times) instruction.

This model really emphasizes collaboration, because this is the next missing piece that is presently being addressed by programmers. Each of these three pieces is further broken down into three components highlighted in orange, each with triple blue contributing elements.

  1. Behaviour and interaction during collaboration require mutual trust, respect, and reciprocity; should be ruled by self-determined guidelines from group norming; and follow a process that is best moderated by a neutral and objective facilitator.
  2. Groups for collaboration may range from study cohorts that may teach each other, through problem solving teams that address real world case studies, to a community that assists one another with technology concerns and instructional issues.
  3. Information sharing includes various methods of communicating content, different means to monitor and organize data for tracking the progress of collaboration, and assorted techniques for co-creating, editing, and publishing collaborative outcomes.
  4. The help desk offers troubleshooting services for learners’ technology issues, develops self-help resources for them to consult before making contact for additional assistance, and treats all of their contacts and needs with a customer service attitude.
  5. The bookstore and library work together to provide i-books and e-texts, give access to online journals and reference materials, and keep the repository of learning objects and open education resources that are reused by faculty in their instruction and course design.
  6. The other services that must be addressed are career planning and internship placements, storage space for the creation of personal e-portfolios, and plagiarism detection testing for learners to check their assignments for “re-used” content before submission to be graded.
  7. The LMS must be: robust enough to function in an online environment beyond classroom work, able to assess learning, interactive with users, adaptive to learning progress, able to track and report gains, and supportive of curriculum development by faculty and others.
  8. Learners should be provided with access to, and training in the use of, asynchronous tools such as: solo web browsing, email, a discussion list or forum, video with accompanying audio, and voice mail (phone messages) as a minimum to attend e-learning courses.
  9. In addition, they should receive access to, and training in the use of, synchronous tools like: group co-browsing, chat, messaging, voice over internet protocol (online telephony), and web-based video conferencing as a minimum to participate in an e-learning campus.

 

 

MEASURING ELEARNING QUALITY: In 2012, I identified 8 critical factors for determining the quality of e-learning; each factor has a different method of evaluation. The factors and their respective evaluation methods are:

  1. PLATFORM: apply the selection/adoption criteria (thermometer above) to assess the quality of the LMS being used
  2. SUPPORT: examine the technological infrastructure to judge its efficacy around speed, latency, capacity, bandwidth, etc.
  3. DESIGN: use the Standards from the QM Higher Education Rubric of Quality Matters to appraise the course design
  4. DELIVERY: employ process (formative) and outcome (summative) evaluation to measure course delivery quality
  5. CONTENT: get opinions from a panel of expert stakeholders: professionals, faculty, staff, students, and alumni
  6. FORMAT: apply the sophistication scale (0-10 ruler above) to determine how well the course content is formatted
  7. STUDENT: ask learners to conduct a self-review of their readiness and preparation to learn in an online environment
  8. FACULTY: ask master teachers to peer-review their colleagues during course delivery and online instruction

 

 

DEFINING DEGREES OF TECHNOLOGY: When folks speak about eLearning, they tend to think only about fully online forms of learning with technology, but I believe that eLearning also includes the technology-enriched classroom and the blending of fully online with technology-enriched. In other words, eLearning is the spectrum shown below across the three columns on the right. One important area of teaching that is often excluded from consideration in eLearning is experiential education (fieldwork, labs, internships, etc.), and so the definitions below (from 2009 work with my Dean of Faculty, Joe Lucero) include experiential elements like labs and field work.

 

 

LEARNING ANALYTICS: Like any form of evidence-based inquiry and/or research, learning analytics is simply the measurement, collection, analysis, interpretation, and reporting of data that are useful in understanding the way people learn and in discovering the best experiences and environments for learning. Generating the large amounts of data necessary to run these analyses, and subsequently drive decisions, requires deep mining of big data sets, while simultaneously retaining students' rights to privacy.

These days, with very large data sets and "Cloud Computing" power readily available, ACADEMIC analytics (as above, but for administration beyond teaching and learning) has become common. In fact, when I was an academic administrator, my laptop's background screen was a dashboard displaying real-time aggregate data on enrollment numbers, retention percentages, probationary or at-risk students, and unsatisfactory academic performance. I was also able to get a snapshot of faculty performance and online teaching behaviors. These academic analyses permitted my staff to intervene in problematic situations and apply solution-focused advising methods.

Recent advances in "Fog Computing" have served to make LEARNING analytics increasingly possible and this has led to adaptive learning (where curriculum contents and teaching delivery methods change with the needs of the learner). Now when I teach a course, albeit online, I get similar analyses sent to my phone about students who have not completed assignments, like posting to discussion boards, have performed poorly on problem solving quizzes, or have not contributed equitably to their team-based projects. This helps me redirect my attention to their learning and make the adaptive experience more personalized for them.

I drew this diagram in 2013 to help faculty understand the main differences. The benefits of the Cloud are obvious: cost savings, independent security, great scalability, high performance, no maintenance, efficient peak-load sharing, reliability, agility, flexibility, and mobility.  However, some drawbacks include lengthy processing times (especially for big data), large bandwidth usage to upload and download between the Cloud and personal devices, and delays due to the remote locations of data centers. In a symbiotic-like relationship, the personal device becomes the input/output interface between people and the Fog/Cloud network.  Fog computing provides real-time data processing, temporary data caching, and computational offloading.  The Cloud offers machine learning, massive parallel data processing, and mining or management of big data sets.

Note that Higher Education, in this diagram, is the Ivory Tower Lighthouse scanning the fog and clouds for four kinds of analytics (higher education, institutional, academic, and learning) applied over four levels (government, executive, administrative, and teaching) and used to describe and diagnose or to predict and prescribe. My work has been mostly at the two lower levels: academic analytics and learning analytics.

 

 

WEARABLE TECHNOLOGIES: When I was running and hiking at university, I had a calculator watch that I used to figure out how long, far, and fast I had gone. Since then, my sister has had a succession of insulin pumps to counter her diabetes. The pumps administered the correctly calculated dosage of insulin required. Later ones measured blood sugar levels and more recent ones provided output to her smartphone. These are very early examples of wearable technology: computing devices that are attached to, or integrated within, the human body.

Wearable technologies achieve at least one of several purposes: informing, entertaining, identifying, controlling, monitoring, and protecting. Like all modern computers, these devices inform (access navigation or climate data) and entertain (record or play video or audio). Unlike some computers, these devices identify (keep or biometrically access passport files or concert tickets) and control (wallet finances or body functions). Uniquely, these devices monitor (measure heart rates or exercise impact) and protect (administer medication or manage stress or pain). To these ends, I have been helping a few fashion designers (students of my fashion designing spouse) to explore these new opportunities with smart/e-textiles and so I prepared this infographic to cover their range of options.

 

 

SYSTEMS DEVELOPMENT (6D) PROCESS: Many such "life-cycles" exist for software engineering; this one is ours. In the late 1990s, working with John Chen (patent holder for Microsoft Exchange) and Brandon Albers (Programmer Extraordinaire) on virtualteamworks.com (described below), we adapted this process for our early collaborative work developing software and hardware systems. The 6Ds (with a small code sketch of the cycle after the list) are:

  1. DEFINE = Assess needs by multiple sources and methods to create blueprints that specify all sponsor requirements.
  2. DRAFT = Engineer by coding (for software) and/or constructing (for hardware) prototypes of the system (either/both).
  3. DEBUG = Fix functions that don't work to attain a final working product for release (over the immediate short term).
  4. DEPLOY = Put the system into play, where the sponsor provides feedback and data are gathered on system performance.
  5. DEEPEN = Improve the system over the long term (by repeating 1 - 4 in order) to create new versions with increased value.
  6. DISPOSE = Define the criteria under which a system is retired and how past data generated by that system are then handled.
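Here is a minimal sketch of the cycle (my illustration; the print statements stand in for real engineering work): DEFINE through DEPLOY run in order, DEEPEN repeats them to produce new versions, and DISPOSE closes the loop.

  from enum import Enum, auto

  class Phase(Enum):
      DEFINE = auto()
      DRAFT = auto()
      DEBUG = auto()
      DEPLOY = auto()
      DEEPEN = auto()   # represented by the outer loop in run_6d()
      DISPOSE = auto()

  def run_6d(versions: int = 2) -> None:
      """Walk the 6D life-cycle: DEEPEN repeats DEFINE..DEPLOY for each new version."""
      for version in range(1, versions + 1):  # DEEPEN = long-term improvement loop
          for phase in (Phase.DEFINE, Phase.DRAFT, Phase.DEBUG, Phase.DEPLOY):
              print(f"v{version}: {phase.name}")  # placeholder for the real work
      print(f"v{versions}: {Phase.DISPOSE.name} (retire the system, handle its data)")

  run_6d()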

Since we have found this approach useful in developing systems beyond hardware and software, I continue to teach it in higher education and management situations and apply it as a variation of organizational development, where the enterprise is a system.

 

 

GEOTEAMING: Initially, John Chen and I met near Seattle around 1997 and formed virtualteamworks.com with several others. John was an ex-Microsoft employee with some software patents and technology success stories behind him. He wanted to do more in experiential training and I wanted to do more with technology. So in those days we traded a lot of great ideas. In early 2000, he was looking for a team-building activity he could do with very large groups. From one of my books, I suggested using GPS receivers to locate clues in a kind of orienteering journey and we came up with GeoTeaming: GPS-based scavenger hunts for competing and cooperating groups.

At the same time, GeoCaching was taking off in the USA and just happened to be based in Seattle. In late 2000, John and I met with the GroundSpeak team (managing the movement) and got their support and blessing for the idea. For the next four years, we divided our attention between virtualteamworks.com and GeoTeaming.com, until the sale of the former, when John solely took on the latter and grew it in a big way. Today, every team-building company has some form of a GPS game that copies some (but not all) of what he does.

 

 

VIRTUALTEAMWORKS: This company was about a decade ahead of its time. To see what it was all about back then, check out this flash presentation from 2000.


 

 

TEAM-BUILDING: I've been building teams since the early 1980s and feel like I have experientially found a progressive method that seems to work for most groups. I liken the process to that of constructing a house. Initially, the foundation is laid and then the walls are framed atop this base. Next, a roof is added to cover everything and the finishing touches are applied. In this house metaphor for team-building, the foundation is composed of trust, communication, and cooperation (without these three as a solid base, all subsequent development efforts can fail). The walls are the elements of a high performing team (these will vary by situation, but the partial list shown is a good start). The roof is leadership with its qualities (again, these vary by situation, with some examples shown). The finishing touches are empowering the team to grow independently.

Foundation: I start with trust and communication. These are reciprocally related and erosion of one erodes the other. However, improving one does not necessarily mean the other improves consequently. Therefore, I tend to concentrate on learning experiences that alternate their emphasis from one to the other until both trust and communication have reached functional levels. Once this first milestone has been passed, I search for some demonstrated evidence of cooperation (conditionally sharing some resources and operating together to change something for the benefit of selected members) and enable the transition of this toward collaboration (unconditionally sharing everything and working together to co-create something new for the benefit of all members and/or others). The appearance of sustained collaboration indicates passage of a second milestone.

Walls: With this foundation in place, and the second milestone behind us, further development can occur with the many elements of teamwork, such as: planning, role clarity, common goals, change, commitment, problem solving, decision making, risk taking, diversity, safe space, co-creation, conflict resolution, creativity, consensus, accountability, respect, encouragement, sharing, ethics, and empathy. I tend to address these independently, one at a time, with simple group initiative activities and other challenges, and then test these elements in synergy using more complex group initiative activities and challenges. Most teams typically reach a breakthrough moment or tipping point where their resistance to change falls away and they embrace their new levels of functional performance. This third milestone signals an easier transition toward functionality and suggests the time to address leadership.

Roof: Once the team is performing well and most behavioral elements are functional, I tend to deliberately shift leadership around to different members by purposefully selecting leaders for specific tasks. By doing so, the team learns to share leadership based on the strengths of its members. With a functional and high performing team, we can begin exploring leadership components such as: influence, style (autocratic, democratic, and abdicratic), orientation (task or relationship), and conditional favorability (group unity, team competence, leader experience, decision consequence, and environmental risk).

Finishing touches: After the fourth milestone of functional leadership has been surpassed, I like to leave teams empowered and independent. Sometimes this means teaching them to self-facilitate and/or continuing to develop without assistance. On other occasions, this means helping them deal with their issues as they arise and without the need for a facilitator to enable their processes.

 

 

TEAM RESEARCH: As Executive Director of CATI, my colleagues and I conducted many studies on the efficacy of experiential teamwork training and leadership development. For a more detailed summary of the research studies (1990-1997), download the 30 CATInate Abstracts. Here are some of the more interesting outcomes.

Virtual Teaming: Virtual teams showed significant improvements in their teamwork over a six month period after participating in an online team-building program. Six component subscales of teamwork (trust, communication, cooperation, problem solving, decision making, and tasking) also showed similar increases.

The purpose of this study was to track changes in teamwork and some of its composite elements over time for virtual teams engaged in an online team-building program. Three experimental groups (n = 4, 6 & 9) received the treatment, while two control groups (n = 7 & 8) did not. The treatment was a four-day (30-hour) team-building program conducted entirely online and composed of group problem-solving tasks followed by debriefing discussions focused on improving virtual teamwork behaviors. Subjects (N = 34) were members of five new virtual teams, located in 9 nations across Europe, speaking several languages and working in English within the manufacturing industry. They completed the vT50 six times: every 2 months over a 10-month period, with the treatment delivered between the second and third completion of the vT50. This valid and reliable instrument measured teamwork and its six components on an interval scale from 0 (never) to 100 (always) for fifty self-report behavior frequencies as observed in other team members.

Since no significant differences were found between the two control groups or among the three experimental groups by the non-parametric Kruskal-Wallis test, these were combined into a single control group (n = 15) and a single experimental group (n = 19) for parametric analysis. Kurtosis, skew, and homogeneity were found to be within acceptable limits of normality. A two-way (2 groups by 6 tests) analysis of covariance used initial testing outcomes as the adjusting covariate and Scheffé post hocs to determine specific differences. The ANCOVA demonstrated interactive effects indicating that all observed changes in teamwork and the six subscales between the control and experimental groups occurred immediately after, and were due to, the online team-building program. This virtual team-building program was effective at raising teamwork by 27 points on a 100-point scale, but without follow-up programming to sustain those changes and fortify the virtual team, that gain eroded by 5 points in the months that followed.
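For readers unfamiliar with the design, a toy one-way version of such an ANCOVA can be run in Python with statsmodels, as sketched below; the numbers are invented for illustration (they are not the study's data), and the actual study used the two-way, repeated-testing design described above.

  import pandas as pd
  import statsmodels.api as sm
  import statsmodels.formula.api as smf

  # Toy pretest/posttest scores for two groups (illustrative values only).
  df = pd.DataFrame({
      "group":    ["control"] * 5 + ["experimental"] * 5,
      "pretest":  [58, 62, 60, 65, 59, 61, 57, 63, 60, 64],
      "posttest": [59, 63, 61, 66, 60, 84, 82, 88, 85, 90],
  })

  # ANCOVA: compare posttest scores by group, adjusting for the pretest covariate.
  model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
  print(sm.stats.anova_lm(model, typ=2))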

Face-to-face Teaming: These ten selected studies (1-10 from top left to bottom right) overview some of the research and its outcomes.

  1. Experiential learning about teamwork was more effective than the classroom learning at improving teamwork, where the classroom outcomes were short lived and the experiential outcomes lasted about one year before returning to baseline.
  2. For team building programs to be effectively utilized back at the office, they should be conducted on intact work units, rather than random samplings, and resources should be dedicated to encourage practice of team behaviors after return to work.
  3. Follow-up procedures have a significant impact on transfer of learning, where most gains appear to erode after 6 months, but self-facilitating teams (those that learn to analyse success and failure like a facilitator) continue to improve their teamwork.
  4. This type of programming may help companies to change their motivational climate with shifts from an autocratic bureaucracy (control-expert influence-dependency orientations) to an empowered team environment (achievement-affiliation-extension).
  5. Sequencing of group before individual activities was critically important to creating effective teamwork, because an inappropriate order of activities (individual before group) can actually retard the development of a high performing team.
  6. Program duration impacted teamwork development (several short programs provided slower but greater gains, while a single program of equal contact time provided quicker but smaller gains), while program setting (camp vs. hotel) didn't impact teamwork.
  7. Program design impacted teamwork development (custom tailored programs appeared to provide greater and more sustained gains in teamwork, than off-the-shelf programs), while program location (indoor vs. outdoor) didn't impact teamwork.
  8. Combined staff pairs (an adventure facilitator paired with a corporate trainer) appear to provide the best organizational team-building outcomes in Corporate Adventure Training programs when compared with pairs of facilitators or trainers alone.
  9. A mix of metaphoric debriefing (first half of program) and isomorphic framing (second half) shows the greatest teamwork acquisition and retention when compared with all isomorphic framing, all metaphoric debriefing, and no debrief or frame.
  10. Problem-focused and solution-focused approaches appear equivalently effective at increasing teamwork in functional groups, however, for dysfunctional groups, problem-focus was less effective and solution-focus was enormously successful.

An additional stream of study examined TRUST and its 5 subscales (A=Acceptance of others' thoughts and ideas, B=Believability that people are genuine during our interactions, C=Confidentiality of my feelings or emotions by others, D=Dependability of others for getting the job done, and E=Encouragement of others while taking risks).

  1. Linear Stepwise Regression Analysis predicted Overall Trust from the 5 subscales (AECBD) with 48.2% explained variance.
  2. Physicality influenced the development of trust with greater gains in D and E for physical activities than non-physical ones.
  3. Using clients to belay develops trust between partners (ACDE) better than using facilitators or technicians (which reduced B).
  4. The ropes course had a profound effect on the enhancement of self-confidence. Specific debriefing (focused on self-confidence) was more effective than general debriefing (about various process topics) for ABC, but not DE.
  5. Both group initiatives and ropes courses were effective at improving organizational trust and BCD, however, group initiatives were better at enhancing A than ropes courses and ropes courses were better at enhancing E than group initiatives.
  6. Touch plays an important role in the development of interpersonal trust in groups and ABC, where males and females view trust differently and they gain and lose trust differently in relation to touch.

One other study of merit was ahead of its time. In response to industry concerns about rising cardiac arrests among males on ropes courses, we were able to predict their highest attained heart rates from an equation based on their age, height, weight, body girths, time to walk a mile, and heart rate after walking a mile, with 64% explained variance (a small evaluation sketch in code follows the equation):

  • Highest heart rate = 192.731 + 0.521 (Heart rate after mile walk) – 1.039 (Age) +
    5.818 (Time to walk the mile) – 35.226 (Height) – 68.106 (Chest ÷ Waist)
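The published equation can be evaluated directly, as in the sketch below; the source does not restate the measurement units (for height, girths, or walk time), so the example inputs are placeholders only.

  def predicted_highest_heart_rate(hr_after_mile_walk: float, age: float,
                                   mile_walk_time: float, height: float,
                                   chest: float, waist: float) -> float:
      """Evaluate the published regression equation (64% explained variance).

      Units follow the original study and are not restated in the text above,
      so treat the example inputs below as placeholders.
      """
      return (192.731
              + 0.521 * hr_after_mile_walk
              - 1.039 * age
              + 5.818 * mile_walk_time
              - 35.226 * height
              - 68.106 * (chest / waist))

  # Placeholder inputs, purely to show the calculation.
  print(round(predicted_highest_heart_rate(hr_after_mile_walk=110, age=45,
                                           mile_walk_time=15, height=1.75,
                                           chest=100, waist=90), 1))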

 

 

TWELVE PRECURSORS TO TEAM SYNERGY: We've all heard the "1+1>2" analogy of synergy. It means that putting the right people together under the right circumstances can lead to results where the "whole becomes so much more than just its constituent parts." When I was a young facilitator, I asked a collection of experts "what are the right circumstances?" Their answers helped me identify twelve precursors that must be present to achieve synergy in teams, and these guided my work well beyond the team-building house above. Where these twelve precursors are present and supported, teams can achieve innovation and greatness!

The initial six precursors relate to the human side of the equation and are no surprise, since many are present in the team-building house described earlier. The team must be functional (high performing, with constructive conflict, but without destructive conflict). Destructive conflicts can be erosive forces against synergy and should be avoided. The team must have a solid foundation of trust, cooperation, and communication. Their process for collaboration must be well facilitated (with agreement on how and when to solve problems, make decisions, reach consensus, use judgement, create new ideas, have fun, enjoy play, etc.). Their leader must be proficient: capable of managing team members' disparate personalities and any destructive conflicts that arise from these. This is difficult to do, since some level of constructive conflict is necessary for innovation. Balancing these two types of conflict is "the great quest" and unbearable burden of most team leaders.

The final six precursors are environmental conditions that determine willingness and ability to synergize. The team has to have previously experienced some form of communal adversity. If they have survived tough times together, then they are more likely to see the need for synergy and the value of innovation. Open dialogue means that different feelings and thoughts can be expressed and received freely and honestly. Safe space refers to a culture or climate that: supports well calculated risk taking, welcomes truth and negative information, and provides opportunities to contribute or dissent without fear of repercussions. Safe space allows for the truth to surface and permits genuine interaction. Exhibited by individuals toward one another, nurturing values of C.A.R.E. (Compassion, Appreciation, Respect & Empathy) are those "golden ground rules for treating others" that sustain healthy relationships within the team. Deliberate diversity is necessary to cross pollinate ideas. If everyone had the same perspective, then you wouldn't need a team: one individual would suffice for decisions, solutions, and tasks. However, one person means team synergy will be impossible and innovation will be limited. Finally, the team needs permission that genuinely empowers them to make a difference. Without this, their motivation will be lacking and every failure will be a fatal one.

We've all seen teams of exceptional people who fail to reach innovation because they lack one or more of these precursors. With the decreasing presence of precursors, synergy becomes increasingly difficult to attain. In turn, these synergy failures cost time, money, energy, and other valuable resources. One of the unmeasured losses is the amount of frustration talented people will feel and how those feelings get reinforced with subsequent failures. Repeatedly frustrated talent ends up leaving the organization (literally and figuratively) in favor of more functional creative opportunities. To prevent this, ensure all twelve precursors are present and accounted for.
