School webcam controversy
In the 2010 Robbins v. Lower Merion School District case, plaintiffs charged that two suburban Philadelphia high schools secretly spied on students by surreptitiously and remotely activating webcams embedded in school-issued laptops the students were using at home, thereby infringing on their privacy rights.
The schools admitted to secretly snapping over 66,000 webshots and screenshots, including webcam shots of students in their bedrooms. LANrev software was used in the Lower Merion School District's student laptop program, overseen by network technician Michael Perbix. In February 2010, Perbix and other administrators in the district were accused of using the software to take undisclosed and unauthorized photographs of students through the webcams on their Macintosh laptops. The lawsuit was brought by the parents of 15-year-old sophomore Blake Robbins, who was accused of illicit behavior allegedly observed through his computer's webcam while he was in his bedroom.
The photographs, taken from a laptop that reportedly had not been stolen, were then allegedly used as evidence in a disciplinary action. The FBI investigated the incident, and a Philadelphia federal judge intervened to sort out issues relating to the lawsuit. Perbix had previously praised Theft Track, the name of the feature that lets administrators remotely photograph potential thieves if a computer is reported stolen, noting in a YouTube video he produced that: It's an excellent feature.
Yes, we have used it, and yes, it has gleaned some results for us.
But it, in and of itself, is just a fantastic feature for trying to—especially when you’re in a school environment and you have a lot of laptops and you’re worried about, you know, laptops getting up and missing.
I’ve actually had some laptops we thought were stolen which actually were still in a classroom, because they were misplaced, and by the time we found out they were back, I had to turn the tracking off.
And I had, you know, a good twenty snapshots of the teacher and students using the machines in the classroom. LANrev's new owner, Absolute Software, staunchly denounced the use of its software for any illegal purpose, emphasizing that theft recovery should be left to law enforcement professionals. It further denied any knowledge of, or complicity in, either Perbix's or the school district's actions.
Absolute stated that the next update of LANrev, which would ship within the next several weeks, would permanently disable Theft Track.

Partners
• Enterprise Desktop Alliance
• Group Logic
• IBM
• Parallels, Inc.
• Web Help Desk
• Microsoft System Center Alliance
• LiveTime CMDB
External links
• Official homepage (http://www.absolute.com/en/products/absolute-manage/features.aspx)

Enterprise information security architecture

Goals
• Provide structure, coherence and cohesiveness.
• Must enable business-to-security alignment.
• Defined top-down, beginning with business strategy.
• Ensure that all models and implementations can be traced back to the business strategy, specific business requirements and key principles.
• Provide abstraction, so that complicating factors such as geography and technology religion can be removed and reinstated at different levels of detail only when required.
• Establish a common "language" for information security within the organization.

Methodology
The practice of enterprise information security architecture involves developing an architecture security framework to describe a series of "current", "intermediate" and "target" reference architectures and applying them to align programs of change.
These frameworks detail the organizations, roles, entities and relationships that exist or should exist to perform a set of business processes.
This framework will provide a rigorous taxonomy and ontology that clearly identifies what processes a business performs and detailed information about how those processes are executed and secured.
The end product is a set of artifacts that describe in varying degrees of detail exactly what and how a business operates and what security controls are required.
These artifacts are often graphical.
Given these descriptions, whose levels of detail will vary according to affordability and other practical considerations, decision makers are provided the means to make informed decisions about where to invest resources, where to realign organizational goals and processes, and what policies and procedures will support core missions or business functions.
A strong enterprise information security architecture process helps to answer basic questions like:
• Is the current architecture supporting and adding value to the security of the organization?
• How might a security architecture be modified so that it adds more value to the organization?
• Based on what we know about what the organization wants to accomplish in the future, will the current security architecture support or hinder that?

Implementing enterprise information security architecture generally starts with documenting the organization's strategy and other necessary details, such as where and how it operates.
The process then cascades down to documenting discrete core competencies, business processes, and how the organization interacts with itself and with external parties such as customers, suppliers, and government entities.
Having documented the organization's strategy and structure, the architecture process then flows down into the discrete information technology components, such as:
• Organization charts, activities, and process flows of how the IT organization operates
• Organization cycles, periods and timing
• Suppliers of technology hardware, software, and services
• Applications and software inventories and diagrams
• Interfaces between applications – that is, events, messages and data flows
• Intranet, Extranet, Internet, eCommerce, and EDI links with parties within and outside of the organization
• Data classifications, databases and supporting data models
• Hardware, platforms, hosting: servers, network components and security devices, and where they are kept
• Local and wide area networks, and Internet connectivity diagrams

Wherever possible, all of the above should be related explicitly to the organization's strategy, goals, and operations.
The enterprise information security architecture will document the current state of the technical security components listed above, as well as an ideal-world desired future state (Reference Architecture), and finally a "Target" future state which is the result of engineering tradeoffs and compromises relative to the ideal.
Essentially the result is a nested and interrelated set of models, usually managed and maintained with specialised software available on the market.
Such exhaustive mapping of IT dependencies has notable overlaps with both metadata in the general IT sense, and with the ITIL concept of the Configuration Management Database.
Maintaining the accuracy of such data can be a significant challenge.
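The current-state versus target-state comparison that these models support can be sketched as a simple gap analysis. This is a minimal illustration under assumed inputs: the control names and status values below are hypothetical placeholders, not part of any standard framework or tool.

```python
# Minimal sketch of a current-state vs. target-state gap analysis.
# Control names and statuses are hypothetical examples.

current_state = {
    "network segmentation": "partial",
    "centralized logging": "absent",
    "disk encryption": "implemented",
}

target_state = {
    "network segmentation": "implemented",
    "centralized logging": "implemented",
    "disk encryption": "implemented",
    "multi-factor authentication": "implemented",
}

def security_gaps(current: dict, target: dict) -> list:
    """Return the controls where the current architecture falls short of the target."""
    return sorted(
        name
        for name, desired in target.items()
        if current.get(name, "absent") != desired
    )

print(security_gaps(current_state, target_state))
# A program of change would then be aligned to close exactly these gaps.
```

In practice such comparisons are maintained in the specialised modelling software mentioned above rather than in ad-hoc scripts, but the underlying operation is the same set difference between documented states.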
Along with the models and diagrams goes a set of best practices aimed at ensuring adaptability, scalability, manageability, etc.
These systems engineering best practices are not unique to enterprise information security architecture but are essential to its success nonetheless.
They involve such things as componentization, asynchronous communication between major components, standardization of key identifiers and so on.
Successful application of enterprise information security architecture requires appropriate positioning in the organization.
The analogy of city-planning is often invoked in this connection, and is instructive.
An intermediate outcome of an architecture process is a comprehensive inventory of business security strategy, business security processes, organizational charts, technical security inventories, system and interface diagrams, and network topologies, and the explicit relationships between them.
The inventories and diagrams are merely tools that support decision making.
But this is not sufficient.
It must be a living process.
The organization must design and implement a process that ensures continual movement from the current state to the future state.
The future state will generally be a combination of one or more of the following:
• Closing gaps that are present between the current organization strategy and the ability of the IT security dimensions to support it
• Closing gaps that are present between the desired future organization strategy and the ability of the security dimensions to support it
• Necessary upgrades and replacements that must be made to the IT security architecture based on supplier viability, age and performance of hardware and software, capacity issues, known or anticipated regulatory requirements, and other issues not driven explicitly by the organization's functional management

On a regular basis, the current state and future state are redefined to account for evolution of the architecture, changes in organizational strategy, and purely external factors such as changes in technology and customer/vendor/government requirements.

High-level security architecture framework
Enterprise information security architecture frameworks are only a subset of enterprise architecture frameworks.
If we had to simplify the conceptual abstraction of enterprise information security architecture within a generic framework, the picture on the right would be acceptable as a high-level conceptual security architecture framework.
Other open enterprise architecture frameworks are:
• The U.S. Department of Defense (DoD) Architecture Framework (DoDAF)
• Extended Enterprise Architecture Framework (E2AF) from the Institute For Enterprise Architecture Developments
• Federal Enterprise Architecture of the United States Government (FEA)
• Capgemini's Integrated Architecture Framework
• The UK Ministry of Defence (MOD) Architecture Framework (MODAF)
• NIH Enterprise Architecture Framework
• Open Security Architecture
• Information Assurance Enterprise Architectural Framework (IAEAF)
• SABSA framework and methodology
• Service-Oriented Modeling Framework (SOMF)
• The Open Group Architecture Framework (TOGAF)
• Zachman Framework

Relationship to other IT disciplines
Enterprise information security architecture is a key component of the information security technology governance process at any organization of significant size.
More and more companies are implementing a formal enterprise security architecture process to support the governance and management of IT.
However, as noted in the opening paragraph of this article, it ideally relates more broadly to the practice of business optimization, in that it addresses business security architecture, performance management and process security architecture as well.
Enterprise Information Security Architecture is also related to IT security portfolio management and metadata in the enterprise IT sense.

Information security

Incident response plans
An incident response plan addresses topics such as:
• Selecting team members
• Defining roles, responsibilities and lines of authority
• Defining a security incident
• Defining a reportable incident
• Training
• Detection
• Classification
• Escalation
• Containment
• Eradication
• Documentation

Change management
Change management is a formal process for directing and controlling alterations to the information processing environment.
This includes alterations to desktop computers, the network, servers and software.
The objectives of change management are to reduce the risks posed by changes to the information processing environment and improve the stability and reliability of the processing environment as changes are made.
It is not the objective of change management to prevent or hinder necessary changes from being implemented.
Any change to the information processing environment introduces an element of risk.
Even apparently simple changes can have unexpected effects.
One of management's many responsibilities is the management of risk.
Change management is a tool for managing the risks introduced by changes to the information processing environment.
Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented.
Not every change needs to be managed.
Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment.
Creating a new user account or deploying a new desktop computer are examples of changes that do not generally require change management.
However, relocating user file shares or upgrading the e-mail server poses a much higher level of risk to the processing environment, and such changes are not a normal everyday activity.
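The distinction between routine, pre-authorized changes and changes that need formal review can be sketched as a simple triage rule. The categories and change names below are illustrative assumptions, not an organizational standard:

```python
# Hypothetical triage: routine changes follow a predefined procedure,
# everything else goes through the formal change management process.

ROUTINE_CHANGES = {"create user account", "deploy desktop computer"}

def needs_change_management(change: str) -> bool:
    """Return True when a change should go to the Change Review Board."""
    return change not in ROUTINE_CHANGES

print(needs_change_management("create user account"))   # routine -> False
print(needs_change_management("upgrade e-mail server")) # higher risk -> True
```

Real organizations would base the rule on documented risk criteria rather than a fixed name list, which is exactly why defining change, and the scope of the change system, is called out below as the critical first step.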
The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system.
Change management is usually overseen by a Change Review Board composed of representatives from key business areas, security, networking, systems administration, database administration, applications development, desktop support and the help desk. The tasks of the Change Review Board can be facilitated with the use of an automated workflow application. The responsibility of the Change Review Board is to ensure that the organization's documented change management procedures are followed.
The change management process is as follows:
• Requested: Anyone can request a change. The person making the change request may or may not be the same person that performs the analysis or implements the change. When a request for change is received, it may undergo a preliminary review to determine if the requested change is compatible with the organization's business model and practices, and to determine the amount of resources needed to implement the change.
• Approved: Management runs the business and controls the allocation of resources; therefore, management must approve requests for changes and assign a priority for every change. Management might choose to reject a change request if the change is not compatible with the business model, industry standards or best practices. Management might also choose to reject a change request if the change requires more resources than can be allocated for the change.
• Planned: Planning a change involves discovering the scope and impact of the proposed change; analyzing the complexity of the change; allocating resources; and developing, testing and documenting both implementation and backout plans. The criteria on which a decision to back out will be made need to be defined.
• Tested: Every change must be tested in a safe test environment, which closely reflects the actual production environment, before the change is applied to the production environment. The backout plan must also be tested.
• Scheduled: Part of the change review board's responsibility is to assist in the scheduling of changes by reviewing the proposed implementation date for potential conflicts with other scheduled changes or critical business activities.
• Communicated: Once a change has been scheduled it must be communicated. The communication is to give others the opportunity to remind the change review board about other changes or critical business activities that might have been overlooked when scheduling the change. The communication also serves to make the help desk and users aware that a change is about to occur. Another responsibility of the change review board is to ensure that scheduled changes have been properly communicated to those who will be affected by the change or otherwise have an interest in it.
• Implemented: At the appointed date and time, the changes must be implemented. Part of the planning process was to develop an implementation plan, a testing plan and a backout plan. If the implementation of the change fails, or the post-implementation testing fails, or other "drop dead" criteria have been met, the backout plan should be implemented.
• Documented: All changes must be documented. The documentation includes the initial request for change; its approval; the priority assigned to it; the implementation, testing and backout plans; the results of the change review board critique; the date/time the change was implemented; who implemented it; and whether the change was implemented successfully, failed, or was postponed.
• Post-change review: The change review board should hold a post-implementation review of changes. It is particularly important to review failed and backed-out changes. The review board should try to understand the problems that were encountered, and look for areas for improvement.
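The steps above form a simple linear workflow, which can be sketched as a small state machine. This is a minimal illustration: the state names mirror the process steps, but the class and its methods are hypothetical, not part of any change management product.

```python
# Sketch of the change management workflow as a linear state machine.
# "backed out" is reached when implementation or post-implementation
# testing fails and the backout plan is executed.

STATES = [
    "requested", "approved", "planned", "tested",
    "scheduled", "communicated", "implemented", "documented", "reviewed",
]

class ChangeRequest:
    def __init__(self, description: str):
        self.description = description
        self.state = "requested"

    def advance(self) -> str:
        """Move the change to the next step in the workflow."""
        i = STATES.index(self.state)
        if i == len(STATES) - 1:
            raise ValueError("change is already fully reviewed")
        self.state = STATES[i + 1]
        return self.state

    def back_out(self) -> str:
        """Invoke the backout plan for a change that failed after implementation."""
        if self.state != "implemented":
            raise ValueError("can only back out an implemented change")
        self.state = "backed out"
        return self.state

change = ChangeRequest("upgrade e-mail server")
while change.state != "implemented":
    change.advance()
print(change.state)  # implemented
```

An automated workflow application of the kind mentioned above essentially enforces such a state machine, with approvals and documentation attached to each transition.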
Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment.
Good change management procedures improve the overall quality and success of changes as they are implemented.
This is accomplished through planning, peer review, documentation and communication.
ISO/IEC 20000, The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps, and the Information Technology Infrastructure Library all provide valuable guidance on implementing an efficient and effective change management program.
Business continuity
Business continuity is the mechanism by which an organization continues to operate its critical business units, during planned or unplanned disruptions that affect normal business operations, by invoking planned and managed procedures.
Contrary to what most people think, business continuity is not necessarily an IT system or process; it is fundamentally about the business.
Disasters and disruptions to business are a reality today. Whether a disaster is natural or man-made, it affects normal life and therefore business. So why is planning so important? In reality, all businesses recover in some form, whether they planned for recovery or not, simply because business is about earning money for survival. Planning means being better prepared to face a disruption, knowing full well that even the best plans may fail. Planning helps to reduce the cost of recovery and operational overheads and, most importantly, lets the business sail through smaller disruptions effortlessly.
For businesses to create effective plans they need to focus upon the following key questions.
Most of these are common knowledge, and anyone can do a BCP.
1. Should a disaster strike, what are the first few things that I should do? Should I call people to find out if they are OK, or call up the bank to figure out whether my money is safe? This is Emergency Response. Emergency Response services help take the first hit when the disaster strikes and, if the disaster is serious enough, the Emergency Response teams need to quickly get a Crisis Management team in place.
2. What parts of my business should I recover first? The one that brings me the most money, the one where I spend the most, or the one that will ensure sustained future growth? The identified sections are the critical business units. There is no magic bullet here, and no one answer satisfies all. Businesses need to find answers that meet their business requirements.
3. How soon should I target to recover my critical business units? In BCP technical jargon this is called the Recovery Time Objective, or RTO. This objective defines the costs the business will need to bear to recover from a disruption. For example, it is cheaper to recover a business in one day than in one hour.
4. What do I need to recover the business? IT, machinery, records, food, water, people – there are many aspects to dwell upon, and the cost factor becomes clearer now. Business leaders need to drive business continuity. "My IT manager spent $200,000 last month and created a DRP; whatever happened to that?" A DRP (Disaster Recovery Plan) is about continuing an IT system, and is one of the sections of a comprehensive Business Continuity Plan; see below for more on this.
5. And where do I recover my business from? Will the business center give me space to work, or would it be flooded by many people queuing up for the same reasons that I am?
6. Once I do recover from the disaster and work at reduced production capacity, since my main operational sites are unavailable, how long can this go on? How long can I do without my original sites, systems and people? This defines the amount of business resilience a business may have.
7. Now that I know how to recover my business, how do I make sure my plan works? Most BCP pundits would recommend testing the plan at least once a year, reviewing it for adequacy, and rewriting or updating the plans either annually or when businesses change.

Disaster recovery planning
While a business continuity plan (BCP) takes a broad approach to dealing with organizational-wide effects of a disaster, a disaster recovery plan (DRP), which is a subset of the business continuity plan, is instead focused on taking the necessary steps to resume normal business operations as quickly as possible.
A disaster recovery plan is executed immediately after the disaster occurs and details what steps are to be taken in order to recover critical information technology infrastructure.

Laws and regulations
Below is a partial listing of European, United Kingdom, Canadian and USA governmental laws and regulations that have, or will have, a significant effect on data processing and information security.
Important industry sector regulations have also been included when they have a significant impact on information security.
• The UK Data Protection Act 1998 makes new provisions for the regulation of the processing of information relating to individuals, including the obtaining, holding, use or disclosure of such information. The European Union Data Protection Directive (EUDPD) requires that all EU members adopt national regulations to standardize the protection of data privacy for citizens throughout the EU.
• The Computer Misuse Act 1990 is an Act of the UK Parliament making computer crime (e.g. cracking, sometimes incorrectly referred to as hacking) a criminal offence. The Act has become a model from which several other countries, including Canada and the Republic of Ireland, have drawn inspiration when subsequently drafting their own information security laws.

IT baseline protection
Heavily networked IT systems typically characterize information technology in government and business these days.
As a rule, therefore, it is advantageous to consider the entire IT system and not just individual systems within the scope of an IT security analysis and concept.
To be able to manage this task, it makes sense to logically partition the entire IT system into parts and to separately consider each part or even an IT network.
Detailed documentation about its structure is a prerequisite for the use of the IT Baseline Protection Catalogs on an IT network.
This can be achieved, for example, via the IT structure analysis described above.
The IT Baseline Protection Catalogs' components must ultimately be mapped onto the components of the IT network in question in a modelling step.

Baseline security check
The baseline security check is an organisational instrument offering a quick overview of the prevailing IT security level.
With the help of interviews, the status quo of an existing IT network (as modelled by IT baseline protection) relative to the number of security measures implemented from the IT Baseline Protection Catalogs is investigated.
The result is a catalog in which the implementation status “dispensable”, “yes”, “partly”, or “no” is entered for each relevant measure.
By identifying not yet, or only partially, implemented measures, improvement options for the security of the information technology in question are highlighted.
The baseline security check gives information about measures which are still missing (a nominal vs. actual comparison). From this follows what remains to be done to achieve baseline protection.
Not all measures suggested by this baseline check need to be implemented.
Peculiarities are to be taken into account! It could be that several more or less unimportant applications, each with lesser protection needs, are running on one server. In their totality, however, these applications are to be provided with a higher level of protection. This is called the cumulation effect.
The applications running on a server determine its need for protection.
In this connection, it is to be noted that several IT applications can run on an IT system.
When this occurs, the application with the greatest need for protection determines the IT system's protection category.
Conversely, it is conceivable that an IT application with great protection needs does not automatically transfer this to the IT system.
This may happen because the IT system is configured redundantly, or because only an inconsequential part is running on it.
This is called the distribution effect.
This is the case, for example, with clusters.
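The maximum principle and the cumulation effect described above can be sketched as follows. The protection levels and the one-level bump for cumulation are illustrative assumptions; the actual BSI methodology defines protection categories in its own terms:

```python
# Sketch: the application with the greatest protection need normally
# determines the IT system's protection category (maximum principle);
# the cumulation effect can raise it when many lesser applications add up.

LEVELS = ["low", "medium", "high", "very high"]

def system_protection_category(app_needs: list, cumulation: bool = False) -> str:
    """Derive a server's protection category from its applications' needs."""
    highest = max(app_needs, key=LEVELS.index)
    if cumulation and highest != "very high":
        # Many individually unimportant applications, taken together,
        # warrant one level more protection (assumed one-step increase).
        highest = LEVELS[LEVELS.index(highest) + 1]
    return highest

print(system_protection_category(["low", "medium", "low"]))                   # medium
print(system_protection_category(["low", "medium", "low"], cumulation=True))  # high
```

The distribution effect would work in the opposite direction: a redundantly configured system could justifiably be assigned a lower category than the maximum principle alone suggests.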
The baseline security check maps baseline protection measures.
This level suffices for low to medium protection needs.
This comprises about 80% of all IT systems, according to BSI estimates. For systems with high to very high protection needs, risk-analysis-based information security concepts, such as ISO/IEC 27001, are usually used.

IT Baseline Protection Catalogs and standards
During its 2005 restructuring and expansion of the IT Baseline Protection Catalogs, the BSI separated methodology from the IT Baseline Protection Catalogs.
The BSI 100-1, BSI 100-2, and BSI 100-3 standards contain information about construction of an information security management system (ISMS), the methodology or basic protection approach, and the creation of a security analysis for elevated and very elevated protection needs building on a completed baseline protection investigation.
BSI 100-4, the “Emergency management” standard, is currently in preparation.
It contains elements from BS 25999, ITIL Service Continuity Management combined with the relevant IT Baseline Protection Catalog components, and essential aspects for appropriate Business Continuity Management (BCM).
Implementing these standards makes certification possible pursuant to BS 25999-2. The BSI has submitted the BSI 100-4 standard's design for online commentary. In this way the BSI brings its standards into line with international norms.
External links
• Federal Office for Security in Information Technology (http://www.bsi.bund.de/english/index.htm)
• IT Security Yellow Pages (http://www.branchenbuch-it-sicherheit.de/)
• IT Baseline protection tools (http://www.bsi.bund.de/english/gstool/index.htm)
• Open Security Architecture – Controls and patterns to secure IT systems (http://www.opensecurityarchitecture.org)

Attenda
Headquarters: Staines, Middlesex
Key people: Mark Fowle (CEO), Simon Hansford (VP Service Strategy and Marketing), Paul Howard (CFO), Paul Morris (VP, Client Service Delivery)
Products: Critical Application Hosting; IT Infrastructure & Operations; SaaS provider for ISVs
Employees: 200+
Website: www.attenda.net

Attenda is a UK-based specialist managed services solutions provider.
It was founded in 1997 by Mark Fowle, Simon Hansford, David Godwin and Neal Gandhi, with the aim of hosting and maintaining its clients' IT systems so that they could focus on strategic projects rather than the day-to-day operation of their IT infrastructure. Attenda is registered in the UK as Attenda Limited and currently employs over 200 people in its offices in Staines, Middlesex, with three UK data centres.

Business model
Attenda's business model is based upon Attenda M.O., a shared operations platform that integrates people, process and technology to deliver high-availability service levels at a cost amortised across Attenda's client base.

Critical application hosting and cloud services
Attenda's Critical Application Hosting service covers enterprise applications, web applications, and messaging and collaboration systems.
Attenda's IT Infrastructure and Operations services extend from the computer room to complete data centres, adhering to ITIL and ISO 20000 certified IT Service Management standards.
Attenda is also a single- and multi-tenanted SaaS provider to independent software vendors (ISVs), as well as a Cloud (Infrastructure as a Service) platform provider.
Attenda's clients include BIW Technologies, bmi, Microsoft, the National Health Service, St James's Place and Travelodge.

Acquisitions
In May 2007 Attenda acquired Manchester-based IFL, a company which provided co-location services to SME businesses in northwest England.

Granular Configuration Automation

Granular Configuration Automation (GCA) is a specialized area in the field of Configuration Management which focuses on visibility and control of an IT environment's configuration and bill-of-material at the most granular level.
This framework focuses on improving the stability of IT environments by analyzing granular information.
It responds to the need to assess the risk level that an environment's configuration poses, allowing IT organizations to focus on the risks with the highest impact on performance. Granular Configuration Automation combines two major trends in configuration management: the move to collect detailed and comprehensive environment information, and the growing use of automation tools.

Driving factors
For IT personnel, IT systems have grown in complexity, supporting a wider and growing range of technologies and platforms.
Application release schedules are accelerating, demanding attention to ever more information. The average Global 2000 firm has more than a thousand applications that its IT organization deploys and supports. New technology platforms such as cloud and virtualization offer IT benefits in reduced server space and energy savings, but complicate configuration management through issues such as sprawl. The need to ensure high availability and consistent delivery of business services has led many companies to develop automated configuration, change and release management processes. Downtime and system outages undermine the environments that IT professionals manage.
Despite advances in infrastructure robustness, occasional hardware, software and database downtime occurs.
Dun & Bradstreet reports that 49% of Fortune 500 companies experience at least 1.6 hours of downtime per week, translating into more than 80 hours annually. The growing cost of downtime has given IT organizations ample evidence of the need to improve their processes.
A conservative estimate from Gartner pegs the hourly cost of downtime for computer networks at $42,000, so a company that suffers worse-than-average downtime of 175 hours a year can lose more than $7 million annually. The demands and complexity of incident investigation put further strain on IT professionals, whose experience alone cannot keep pace with the scale of the environments their organizations run.
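The downtime figures quoted above can be sanity-checked with a few lines of arithmetic (the variable names are ours, chosen for illustration):

```python
# Checking the downtime numbers cited in the text.
hours_per_week = 1.6                 # Dun & Bradstreet weekly figure
hours_per_year = hours_per_week * 52 # about 83 hours, i.e. "more than 80 hours annually"

cost_per_hour = 42_000               # Gartner's conservative hourly estimate
annual_loss = cost_per_hour * 175    # 175 hours of downtime a year
print(round(hours_per_year, 1), annual_loss)
```

Both claims hold: 1.6 hours a week is over 80 hours a year, and 175 hours at $42,000 per hour comes to $7.35 million, consistent with "more than $7 million per year".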
Incidents may be captured, monitored and reported using standardized forms, often through help-desk or trouble-ticket software, and sometimes under a formal process methodology such as ITIL.
But the core activity is still handled by a technical specialist "nosing around" the system, trying to figure out what is wrong based on previous experience and personal expertise.

Potential applications
• Release validation — validating releases and mitigating the risk of production outages
• Incident prevention — identifying and alerting on undesired changes, thereby avoiding costly environment incidents
• Incident investigation — pinpointing the root cause of an incident and significantly cutting the time and effort spent on investigation
• Disaster recovery verification — accurately validating disaster recovery plans and eliminating surprises at the most vulnerable times
• Security — identifying deviations from security policy and best practices
• Compliance — discovering non-compliant situations and providing a detailed audit trail
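The incident-prevention use case, detecting undesired changes at the most granular level, can be sketched as a snapshot diff. Everything below (the `fingerprint` and `detect_drift` helpers, the item names and configuration keys) is an illustrative assumption, not any particular GCA product's API:

```python
# Minimal sketch of granular configuration drift detection, assuming each
# environment snapshot is a dict of {item_name: configuration_dict}.
import hashlib
import json

def fingerprint(config_item: dict) -> str:
    """Hash a configuration item over every key/value (the granular level)."""
    canonical = json.dumps(config_item, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def detect_drift(baseline: dict, current: dict) -> list:
    """Report items that were added, removed, or modified between snapshots."""
    changes = []
    for name in baseline.keys() | current.keys():
        if name not in current:
            changes.append((name, "removed"))
        elif name not in baseline:
            changes.append((name, "added"))
        elif fingerprint(baseline[name]) != fingerprint(current[name]):
            changes.append((name, "modified"))
    return sorted(changes)

baseline = {"web01": {"java": "1.8.0_291", "heap_mb": 2048},
            "db01": {"engine": "postgres", "version": "12.4"}}
current  = {"web01": {"java": "1.8.0_291", "heap_mb": 4096},
            "db01": {"engine": "postgres", "version": "12.4"},
            "web02": {"java": "1.8.0_291", "heap_mb": 2048}}

print(detect_drift(baseline, current))  # [('web01', 'modified'), ('web02', 'added')]
```

A real GCA tool would collect the snapshots automatically and attach risk weights to each change; the diff itself is the core mechanism.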
High performance cloud computing

High Performance Cloud Computing (HPC2) is a term coined by Robert L.
Clay of Sandia National Labs to refer to a body of work focused on providing a scalable application runtime environment based on core notions from Cloud Computing (specifically, extreme hardware fault tolerance through software) applied for use on high performance machine architectures (high cross-section bandwidth).
Work on HPC2 emerged as a response to the perceived breakdown of several core assumptions in traditional high performance computing (HPC) at extreme scale (exascale and beyond).
These assumptions include:
1. That compute nodes persist for the duration of a job.
2. That the MPI programming model will scale to arbitrary size.
3. That we can build hardware that is sufficiently reliable (so as not to require fault-oblivious software).
4. That capability machines are fundamentally different from capacity machines.
These assertions were meant both for rhetorical purposes and as strictly technical observations.
An alternative set of assumptions was offered to replace this set, based on the perception that the first three of the above assumptions were failing as machines and applications scaled.
This alternative set of assumptions includes:
1. That compute nodes will fail during execution of a job, and so will other hardware components.
2. That the MPI cooperative computing model will not scale far enough.
3. That sufficiently reliable hardware is too expensive and impractical at scale.
4. That capability machines of the future may be similar to capacity machines. (This fourth assertion was posed as a question.)

The core notions driving HPC2 centre on building an application runtime system that can scale to arbitrary size and that is not specific to any one hardware system design or configuration.
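The first of these alternative assumptions, that nodes will fail mid-job and the software must cope, can be illustrated with a toy scheduler. This sketch is purely illustrative (the `run_job` function, node names and failure injection are invented for this example) and is not Sandia's runtime:

```python
# Toy illustration of fault tolerance in software: a runtime that drops a
# failed node from the pool and re-queues its interrupted task.
def run_job(tasks, nodes, doomed):
    """Run tasks round-robin over nodes; any node in `doomed` fails on its
    first assignment, and its task is rescheduled on a surviving node."""
    pending = list(tasks)
    alive = list(nodes)
    results = {}
    i = 0
    while pending:
        if not alive:
            raise RuntimeError("all nodes failed")
        task = pending.pop(0)
        node = alive[i % len(alive)]
        i += 1
        if node in doomed:          # simulated hardware fault mid-task
            alive.remove(node)      # the runtime retires the dead node...
            pending.append(task)    # ...and re-queues the interrupted task
        else:
            results[task] = node
    return results

done = run_job(tasks=list(range(6)), nodes=["n0", "n1", "n2", "n3"], doomed={"n2"})
print(done)  # all 6 tasks complete, none assigned to the failed node "n2"
```

The point of the HPC2 argument is that this re-queueing logic, trivial here, must become a first-class property of the runtime once node failure during a job is the expected case rather than the exception.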