Friday, September 6, 2019

Humor in the Workplace: Weighing the Pros and Cons Essay Example for Free

Humor in the Workplace: Weighing the Pros and Cons Essay

Submitted to: Wilma Thomason, Principles of Management Instructor
Prepared by: Successful Future Management, Kenya Harden
Sunday, June 28, 2009

Humor in the Workplace: Weighing the Pros and Cons

I. Introduction
   A. Evaluating the problems in the workplace
      1. Poor productivity
      2. Lack of creativity
II. Benefits of incorporating humor at work
   A. Improves health
   B. Reduces stress
III. How stress affects companies
   A. Increases possibility of mistakes
   B. Loss of money
IV. Conclusion
   A. Companies encourage humor
      1. Set ground rules
      2. Know what is allowed

Increase your company's earning potential by paying your employees to watch comedy shows and play games for thirty minutes a day! Reduce employee sick leave by establishing company playtime. These statements may sound asinine, but companies are discovering the benefits of incorporating enjoyment and laughter into the office. People spend at least forty hours a week in the workplace, and about five or more of those hours are spent trying to think of a new idea or struggling to complete a project because they can't focus. Some employees are simply drained, their minds bombarded with thoughts of what they need to do at home. As a way to keep their employees focused and boost productivity, many companies are taking heed of the saying "Laughter is the best medicine." It is a fast-growing trend for businesses to find ways to allow their employees a period to loosen up and laugh. Laughter has been found to keep a person healthy and has several benefits: it lowers tension, causes one to relax, boosts the immune system, and can even temporarily relieve pain (www.HelpGuide.org). "Laugh and the world laughs with you" seems true even in the workplace. It is important that various methods are available to help employees eliminate stress so that their work is not affected. Stress can have a very negative effect on employee performance and causes burnout.
A person who is stressed out is often distracted and makes mistakes. This can cause a major financial loss to the company, as well as to the employee, if injury occurs or a major project is delayed. Some companies consider having fun or joking around on the job to be "goofing off." Management in these companies feels that employees do not take their jobs seriously. Some employees are even labeled as adolescent, unprofessional, and unproductive. This type of atmosphere creates tension and increases the risk of work-related health problems in employees. It also costs the company money through excessive downtime due to the lack of creativity. However, many companies have been encouraging employees to have fun at work and have even set up special break rooms for their employees, equipped with televisions and games. By allowing employees to enjoy themselves at work, companies are building better relationships and strengthening communication between management and employees. So why not put the low-cost (if not free) remedy of humor in place? Laughter is contagious. It can make the workplace more pleasurable by easing tension, reducing the risk of employee burnout, and improving productivity and creativity. The key to successfully implementing humor in the workplace is to make sure that no one oversteps boundaries or causes injury to anyone. It is crucial that no one is offended by joking; steer clear of religious, political, or personal topics. Everyone must be mindful of what is allowed and what is taboo. It is recommended that all businesses incorporate humor into the daily routines of their employees. The physical and mental health of employees is reflected in their performance. Allowing fun and relaxation on the job as a release will only improve productivity. The benefits of humor far outweigh the risks.

Works Cited

Emotional Intelligence Central. "Laughter is the Best Medicine: The Health Benefits of Humor." HelpGuide.org, 2001-2009. 24 June 2009.
Levy, S. "Working in Dilbert's World." Newsweek, 12 Aug. 1996. 23 June 2009.
McGhee, P. "Health, Healing and the Immune System: Humor as Survival Training." 23 June 2009.
University of Missouri-Columbia. "Light Humor in the Workplace is a Good Thing, Review Shows." ScienceDaily, 1 November 2007. 24 June 2009. http://www.sciencedaily.com/releases/2007/10/071031130917.htm
Wood, Robert E., Beckmann, Nadin, and Pavlakis, Fiona. "Humor in Organizations: No Laughing Matter." Research Companion to the Dysfunctional Workplace. Ed. Langan-Fox, Janice, Cooper, Cary L., and Klimoski, Richard J. Cheltenham, Glos, UK; Northampton, MA: Edward Elgar, 2007. 216-231.

Decrease CLABSI in the NICU Essay Example for Free

Decrease CLABSI in the NICU Essay The purpose of this initiative is to decrease and/or eliminate central line-associated bloodstream infections (CLABSI) in the neonatal intensive care unit (NICU) at Aurora Bay Care Medical Center. Hospital-acquired infections, including CLABSI, are a major cause of mortality, prolonged hospitalization, and extra costs for NICU patients (Stevens & Schulman, 2012). The goal of this initiative is to decrease CLABSI by 75% by reducing the number of days lines are in place and standardizing the insertion process and line maintenance. CLABSI is preventable and increases the risk of neurodevelopmental impairment in very low birth weight infants. It is estimated that up to 70% of hospital-acquired infections in preterm infants are caused by CLABSI (Stevens & Schulman, 2012). It is also estimated that 41,000 CLABSI occur in United States hospitals every year (Centers for Disease Control and Prevention [CDC], 2012). CLABSI is preventable through proper management of the central line: insertion must be performed under completely sterile conditions, and rigorous attention must be paid to catheter care. The catheter hub is the main culprit in these infections, so it needs to be a large part of the initiative (Stevens & Schulman, 2012). The participants in this initiative include neonatologists, neonatal nurse practitioners, nurses, infection control personnel, the NICU supervisor, and the NICU manager. Together, they will form a core team of 10 people with at least one person from each level of care. The team will analyze current NICU practices and establish protocols grounded in evidence-based practice. The team will investigate the cause of each infection and agree on the changes that need to be made. They will meet every other week until the new practices have been established, at which time they can determine how often they need to meet. Each member must play an active role in the investigation process as well as in the agreed-upon changes.
There are multiple benefits to the proposed initiative. Hospital-acquired infections will be reduced, which means there will be a reduction in harm to the patients. This will mean major cost savings for Aurora Bay Care Medical Center, because there will not be the additional cost of treating a preventable infection. Staff will collaborate for the greater good of the NICU. The best practices that come out of the initiative can be shared with other NICUs to help decrease CLABSI across all hospitals. The cost of the initiative will be minimal compared to the cost of treating a CLABSI. On top of the morbidity and mortality resulting from the infection, the financial costs are significant. Many of these costs are no longer covered by insurance because the infection was a result of the hospital stay. The CDC recently estimated the cost of a CLABSI to be $29,156 per case, with an estimated mortality of 12-25% (Horan, 2010). The largest cost incurred by this initiative will be staffing. The team of approximately 10 people will be paid for their time on the team, which will meet every other week for an undetermined period, not to exceed 3 months. Any time spent on research will need to be reimbursed. The entire staff will need to be trained on the new processes before they are rolled out. They will be required to do hands-on training as well as complete a competency assessment designed by the team. There will not be an increase in the cost of supplies, as the NICU has all the supplies necessary at this time. If it is determined that different supplies are needed, that will be addressed at the time. Data definitions and the procedures used for collection will be determined by the team at the first meeting. The data will be tracked from the first day the team meets throughout the course of the quality initiative. The original goal will be to decrease CLABSI by 75% in the first year.
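The potential savings can be sketched with simple arithmetic. The baseline case count below is hypothetical (the essay does not state the unit's actual annual CLABSI count); the per-case cost is the CDC estimate cited above:

```python
# Hedged sketch of the initiative's potential savings.
# baseline_cases is a hypothetical figure for illustration only;
# cost_per_case is the CDC estimate cited in the text (Horan, 2010).
baseline_cases = 10          # hypothetical annual CLABSI count for one NICU
cost_per_case = 29_156       # USD per case, CDC estimate
reduction_goal = 0.75        # initiative's goal: reduce CLABSI by 75%

cases_prevented = baseline_cases * reduction_goal
annual_savings = cases_prevented * cost_per_case
print(f"Cases prevented per year: {cases_prevented}")       # 7.5
print(f"Estimated annual savings: ${annual_savings:,.0f}")  # $218,670
```

Even under this modest hypothetical baseline, the projected savings dwarf the staffing and training costs described above.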
A detailed analysis must be performed on the processes that were used to implement and maintain evidence-based practices. Each infection must have an investigational analysis completed. The data that is collected will be kept completely confidential so as not to violate the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. In conclusion, the purpose of this initiative is to reduce CLABSI by at least 75% in the NICU at Aurora Bay Care Medical Center. This will be accomplished through training and education for the doctors, nurses, any staff who come into contact with the infants, and the parents. This is a win-win for both the patients and the hospital, because it will reduce the morbidity and mortality caused by this preventable infection as well as reduce costs significantly for the hospital.

References

Centers for Disease Control and Prevention. (2012). Central line-associated bloodstream infection (CLABSI) event. Retrieved from http://www.cdc.gov/nhsn/pdfs/pscmanual/4psc_clabscurrent.pdf
Horan, T. C. (2010). Central line-associated bloodstream infection (CLABSI) criteria and case studies. Retrieved from http://www.azdhs.gov/phs/oids/hai/documents/NHSN_Workshop1_CLABSI_Criteria_Studies.pdf
Stevens, T. P., & Schulman, J. (2012). Evidence-based approach to preventing central line-associated bloodstream infection in the NICU. Acta Paediatrica, 11-16. doi:10.1111/j.1651-2227.2011.02547.x

Thursday, September 5, 2019

Development of Peer-to-Peer Network System

Development of Peer-to-Peer Network System

Procedures followed to complete this project:

Task 01: Familiarize ourselves with the equipment and prepare an action plan.
Task 02: Prepare the work area.
Task 03: Fix the hardware components in place and assemble three PCs.
Task 04: Install a NIC in each PC.
Task 05: Cable the three computers and configure the peer-to-peer network using a hub or switch.
Task 06: Install the Windows operating system on each PC.
Task 07: Install and configure the printer on one of the PCs.
Task 08: Share the printer with the other PCs in the LAN.
Task 09: Establish one shared folder.
Task 10: Create a test document on one of the PCs and copy the file to each of the other PCs in the network.
Task 11: Test the printer by printing the test document from each of the networked PCs.

Time allocation for the tasks:

Task 01 - 1 hour
Task 02 - 30 minutes
Task 03 - 1½ hours
Task 04 - 1½ hours
Task 05 - 1½ hours
Task 06 - 3 hours
Task 07 - 15 minutes
Task 08 - 15 minutes
Task 09 - 15 minutes
Task 10 - 10 minutes
Task 11 - 5 minutes
Total time allocation - 10 hours

Physical structure of the proposed peer-to-peer network system: in a peer-to-peer network there are no dedicated servers and no hierarchy among the computers; each user decides who may access the resources shared from his or her machine.

Processors

In 1945, John von Neumann published the idea of the first computer with a processing unit capable of performing different tasks. The computer, called the EDVAC, was finished in 1949. These first primitive computers, such as the EDVAC and the Harvard Mark I, were incredibly bulky and large, their processing units built from hundreds of components to perform the computer's tasks. Starting in the 1950s, the transistor was introduced into the CPU. This was a vital improvement, because transistors removed much of the bulky material and wiring and allowed for more intricate and reliable CPUs. The 1960s and 1970s brought about the advent of microprocessors.
These were very small, with feature sizes eventually measured in micrometers and later nanometers, and much more powerful. Microprocessors helped this technology become much more available to the public due to their size and affordability. Eventually, companies like Intel and IBM helped shape microprocessor technology into what we see today. The computer processor has evolved from a big, bulky contraption to a minuscule chip. Computer processors are responsible for four basic operations. Their first job is to fetch an instruction from a memory source. Next, the CPU decodes the instruction to determine what operation is required. The third step is execution, in which the CPU carries out the operation. The fourth and final step is the write-back, in which the CPU stores the result of the operation back to a register or memory. Two companies are responsible for the vast majority of CPUs sold around the world. Intel Corporation is the largest CPU manufacturer in the world and the maker of the majority of CPUs found in personal computers. Advanced Micro Devices, Inc. (AMD) has in recent years been Intel's main competitor in the CPU industry. The CPU has greatly helped the world progress into the digital age. It has enabled the production of computers and other machines that are essential to our global society. For example, many of the medical advances made today are a direct result of the capabilities of computer processors. As CPUs improve, the devices they are used in will also improve, and their significance will become even greater.

VGA

The term Video Graphics Array (VGA) refers specifically to the display hardware first introduced with the IBM PS/2 line of computers in 1987, but through its widespread adoption it has also come to mean the analogue computer display standard, the 15-pin D-sub miniature VGA connector, or the 640×480 resolution itself.
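The memory footprint of that 640×480 mode is easy to work out; a quick sketch (the planar bit-plane layout used on real VGA hardware is simplified away here, and this just counts raw bits):

```python
# Memory needed for the classic VGA 640x480 16-colour mode.
# 16 colours -> 4 bits per pixel; real VGA hardware splits these across
# four bit-planes, a detail glossed over in this sketch.
width, height = 640, 480
bits_per_pixel = 4                          # 16 colours = 2**4
framebuffer_bytes = width * height * bits_per_pixel // 8
print(framebuffer_bytes)                    # 153600 bytes
print(framebuffer_bytes / 1024)             # 150.0 KiB, fits in VGA's 256 KiB
```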
While this resolution has been superseded in the personal computer market, it is becoming a popular resolution on mobile devices. Video Graphics Array (VGA) was the last graphical standard introduced by IBM that the majority of PC clone manufacturers conformed to, making it (as of 2009) the lowest common denominator that all PC graphics hardware supports before a device-specific driver is loaded into the computer. For example, the MS-Windows splash screen appears while the machine is still operating in VGA mode, which is the reason this screen always appears in reduced resolution and colour depth. VGA was officially superseded by IBM's XGA standard, but in reality it was superseded by numerous slightly different extensions to VGA made by clone manufacturers, which came to be known collectively as Super VGA. VGA is referred to as an array instead of an adapter because it was implemented from the start as a single chip (an ASIC), replacing the Motorola 6845 and dozens of discrete logic chips that covered the full-length ISA boards of the MDA, CGA, and EGA. Its single-chip implementation also allowed the VGA to be placed directly on a PC's motherboard with a minimum of difficulty (it only required video memory, timing crystals and an external RAMDAC), and the first IBM PS/2 models were equipped with VGA on the motherboard.

RAM

Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today, it takes the form of integrated circuits that allow stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data. By contrast, storage devices such as tapes, magnetic discs and optical discs rely on the physical movement of the recording medium or a reading head.
In these devices, the movement takes longer than the data transfer, and the retrieval time varies based on the physical location of the next item. The word RAM is often associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. Many other types of memory are RAM too, including most types of ROM and a kind of flash memory called NOR flash. An early type of widespread writable random-access memory was magnetic core memory, developed from 1949 to 1952 and subsequently used in most computers until the development of static and dynamic integrated RAM circuits in the late 1960s and early 1970s. Before this, computers used relays, delay-line memory, or various kinds of vacuum tube arrangements to implement main memory functions (i.e., hundreds or thousands of bits), some of which were random access and some not. Latches built out of vacuum tube triodes, and later out of discrete transistors, were used for smaller and faster memories such as registers and random-access register banks. Modern types of writable RAM generally store a bit of data either in the state of a flip-flop, as in SRAM (static RAM), or as a charge in a capacitor (or transistor gate), as in DRAM (dynamic RAM), EPROM, EEPROM and Flash. Some types have circuitry to detect and/or correct random faults, called memory errors, in the stored data, using parity bits or error correction codes. RAM of the read-only type, ROM, instead uses a metal mask to permanently enable or disable selected transistors, rather than storing a charge in them. As both SRAM and DRAM are volatile, other forms of computer storage, such as disks and magnetic tapes, have been used as persistent storage in traditional computers. Many newer products, such as PDAs and small music players, instead rely on flash memory to maintain data when not in use. Certain personal computers, such as many rugged computers and netbooks, have also replaced magnetic disks with flash drives.
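The constant-time property that gives RAM its name can be contrasted with sequential media in a toy model; this is purely illustrative code, not a model of any real device:

```python
# Toy contrast between random access (constant cost, like RAM) and
# sequential access (cost grows with distance, like a tape), as
# described in the text. Cost is counted in abstract "steps".

def ram_read(memory, address):
    """One step regardless of address: constant-time random access."""
    return memory[address], 1  # (value, cost in steps)

def tape_read(tape, position, head=0):
    """The head must move past every cell between it and the target."""
    steps = abs(position - head)
    return tape[position], steps

data = list(range(1000))
_, ram_cost = ram_read(data, 900)
_, tape_cost = tape_read(data, 900)   # head starts at position 0
print(ram_cost, tape_cost)            # 1 900
```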
With flash memory, only the NOR type is capable of true random access, allowing direct code execution, and it is therefore often used instead of ROM; the lower-cost NAND type is commonly used for bulk storage in memory cards and solid-state drives. Similar to a microprocessor, a memory chip is an integrated circuit (IC) made of millions of transistors and capacitors. In the most common form of computer memory, dynamic random access memory (DRAM), a transistor and a capacitor are paired to create a memory cell, which represents a single bit of data. The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state.

Types of RAM

[Figure captions: DDR2 modules with and without heat-spreaders, laptop DDR2, DDR, and laptop DDR; a 1-megabit chip, one of the last models developed by VEB Carl Zeiss Jena in 1989.]

Many computer systems have a memory hierarchy consisting of CPU registers, on-die SRAM caches, external caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as RAM by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components makes the access time variable, although not to the extent that access to rotating storage media or a tape is variable. The overall goal of using a memory hierarchy is to obtain the highest possible average access performance while minimizing the total cost of the entire memory system. (Generally, the memory hierarchy follows the access times: fast CPU registers at the top and the slow hard drive at the bottom.) In many modern personal computers, the RAM comes in easily upgraded modules, called memory modules or DRAM modules, about the size of a few sticks of chewing gum.
These can quickly be replaced should they become damaged or too small for current purposes. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated into the CPU and other ICs on the motherboard, as well as in hard drives, CD-ROMs, and several other parts of the computer system.

Hard Disk

A hard disk drive (often shortened to hard disk, hard drive, or HDD) is a non-volatile storage device that stores digitally encoded data on rapidly rotating platters with magnetic surfaces. Strictly speaking, drive refers to the motorized mechanical aspect that is distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media. HDDs (introduced in 1956 as data storage for an IBM accounting computer) were originally developed for use with general-purpose computers. During the 1990s, the need for large-scale, reliable storage, independent of a particular device, led to the introduction of embedded systems such as RAID arrays, network-attached storage (NAS) systems, and storage area network (SAN) systems that provide efficient and reliable access to large volumes of data. In the 21st century, HDD usage expanded into consumer applications such as camcorders, cell phones (e.g. the Nokia N91), digital audio players, digital video players, digital video recorders, personal digital assistants and video game consoles. HDDs record data by magnetizing ferromagnetic material directionally, to represent either a 0 or a 1 binary digit. They read the data back by detecting the magnetization of the material. A typical HDD design consists of a spindle that holds one or more flat circular disks called platters, onto which the data are recorded.
The platters are made from a non-magnetic material, usually aluminium alloy or glass, and are coated with a thin layer of magnetic material, typically 10-20 nm in thickness, with an outer layer of carbon for protection. Older disks used iron(III) oxide as the magnetic material, but current disks use a cobalt-based alloy. The platters are spun at very high speeds. Information is written to a platter as it rotates past devices called read-and-write heads that operate very close (tens of nanometres in new drives) to the magnetic surface. The read-and-write head is used to detect and modify the magnetization of the material immediately under it. There is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of the platter. The arm is moved using a voice coil actuator or, in some older designs, a stepper motor. The magnetic surface of each platter is conceptually divided into many small sub-micrometre-sized magnetic regions, each of which is used to encode a single binary unit of information. Initially the regions were oriented horizontally, but beginning about 2005 the orientation was changed to perpendicular. Due to the polycrystalline nature of the magnetic material, each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic grains are typically 10 nm in size, and each forms a single magnetic domain. Each magnetic region in total forms a magnetic dipole which generates a highly localized magnetic field nearby. A write head magnetizes a region by generating a strong local magnetic field. Early HDDs used an electromagnet both to magnetize the region and to then read its magnetic field by using electromagnetic induction. Later versions of inductive heads included metal-in-gap (MIG) heads and thin-film heads.
As data density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the head changed according to the strength of the magnetism from the platter. Later development made use of spintronics; in these heads the magnetoresistive effect was much greater than in earlier types, and was dubbed giant magnetoresistance (GMR). In today's heads, the read and write elements are separate, but in close proximity, on the head portion of an actuator arm. The read element is typically magnetoresistive while the write element is typically thin-film inductive. HD heads are kept from contacting the platter surface by the air that is extremely close to the platter; that air moves at, or close to, the platter speed. The read-and-write head is mounted on a block called a slider, and the surface next to the platter is shaped to keep it just barely out of contact: a type of air bearing. In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects. To counter this, the platters are coated with two parallel magnetic layers, separated by a three-atom-thick layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientations, thus reinforcing each other. Another technology used to overcome thermal effects and allow greater recording densities is perpendicular recording, first shipped in 2005; as of 2007 the technology was used in many HDDs. The grain boundaries turn out to be very important in HDD design. The reason is that the grains are very small and close to each other, so the coupling between adjacent grains is very strong. When one grain is magnetized, the adjacent grains tend to be aligned parallel to it or demagnetized. Then both the stability of the data and the signal-to-noise ratio are compromised.
A clear perpendicular grain boundary can weaken the coupling of the grains and subsequently increase the signal-to-noise ratio. In longitudinal recording, the single-domain grains have uniaxial anisotropy with easy axes lying in the film plane. The consequence of this arrangement is that adjacent magnets repel each other, so the magnetostatic energy is so large that it is difficult to increase areal density. Perpendicular recording media, on the other hand, have the easy axis of the grains oriented perpendicular to the disk plane. Adjacent magnets attract each other and the magnetostatic energy is much lower, so much higher areal density can be achieved in perpendicular recording. Another unique feature of perpendicular recording is that a soft magnetic underlayer is incorporated into the recording disk. This underlayer is used to conduct the writing magnetic flux so that writing is more efficient; this is discussed under the writing process. Therefore, a higher-anisotropy medium film, such as L10-FePt or rare-earth magnets, can be used.

[Figure captions: an opened hard drive with the top magnet removed, showing the copper head actuator coil (top right); a hard disk drive with the platters and motor hub removed, showing the copper-colored stator coils surrounding a bearing at the center of the spindle motor. The orange stripe along the side of the arm is a thin printed-circuit cable; the spindle bearing is in the center.]

A typical hard drive has two electric motors, one to spin the disks and one to position the read/write head assembly. The disk motor has an external rotor attached to the platters; the stator windings are fixed in place. The actuator has a read-write head under the tip of its very end (near center); a thin printed-circuit cable connects the read-write head to the hub of the actuator.
A flexible, somewhat U-shaped ribbon cable (seen edge-on below and to the left of the actuator arm in the first image, and more clearly in the second) continues the connection from the head to the controller board on the opposite side. The head support arm is very light, but also rigid; in modern drives, acceleration at the head reaches 250 g. The silver-colored structure at the upper left of the first image is the top plate of the permanent-magnet and moving-coil motor that swings the heads to the desired position (it is shown removed in the second image). The plate supports a thin neodymium-iron-boron (NIB) high-flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives have only one magnet). The voice coil itself is shaped rather like an arrowhead and made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the actuator bearing center) interact with the magnetic field, developing a tangential force that rotates the actuator. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. (See magnetic field: force on a charged particle.) If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore the surface of the magnet is half N pole and half S pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head.
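The geometry described above (concentric tracks on each platter surface, one head per surface, sectors along each track) is traditionally mapped to linear sector numbers by the cylinder-head-sector (CHS) addressing scheme. The formula below is the standard conversion, though it is not discussed in the text, and the geometry values are hypothetical, not those of any real drive:

```python
# Classic CHS -> LBA conversion:
#   LBA = (C * heads_per_cylinder + H) * sectors_per_track + (S - 1)
# By convention sectors are numbered from 1; cylinders and heads from 0.
# The geometry below is a hypothetical example.

HEADS = 16      # read/write heads (one per platter surface)
SECTORS = 63    # sectors per track

def chs_to_lba(cylinder: int, head: int, sector: int) -> int:
    """Map a cylinder/head/sector triple to a logical block address."""
    return (cylinder * HEADS + head) * SECTORS + (sector - 1)

print(chs_to_lba(0, 0, 1))   # first sector on the disk -> LBA 0
print(chs_to_lba(1, 0, 1))   # first sector of the next cylinder -> LBA 1008
```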
Floppy Disk

A floppy disk is a data storage medium composed of a disk of thin, flexible (floppy) magnetic storage medium encased in a square or rectangular plastic shell. Floppy disks are read and written by a floppy disk drive, or FDD, initials which should not be confused with fixed disk drive, another term for a (non-removable) type of hard disk drive. Invented by IBM, floppy disks in 8-inch (200 mm), 5¼-inch (133.35 mm), and 3½-inch (90 mm) formats enjoyed many years as a popular and ubiquitous form of data storage and exchange, from the mid-1970s to the late 1990s. While floppy disk drives still have some limited uses, especially with legacy industrial computer equipment, they have now been largely superseded by USB flash drives, external hard drives, CDs, DVDs, and memory cards (such as Secure Digital). The 5¼-inch disk had a large circular hole in the center for the spindle of the drive and a small oval aperture in both sides of the plastic to allow the heads of the drive to read and write the data. The magnetic medium could be spun by rotating it from the middle hole. A small notch on the right-hand side of the disk identified whether the disk was read-only or writable, detected by a mechanical switch or phototransistor above it. Another LED/phototransistor pair located near the center of the disk could detect a small hole, called the index hole, once per rotation of the magnetic disk. It was used to detect the start of each track, and whether or not the disk was rotating at the correct speed; some operating systems, such as Apple DOS, did not use index sync, and the drives designed for such systems often lacked the index hole sensor. Disks of this type were said to be soft-sector disks. Very early 8-inch and 5¼-inch disks also had physical holes for each sector, and were termed hard-sector disks.
Inside the disk were two layers of fabric designed to reduce friction between the medium and the outer casing, with the medium sandwiched in the middle. The outer casing was usually a one-part sheet, folded double with flaps glued or spot-welded together. A catch was lowered into position in front of the drive to prevent the disk from emerging, as well as to raise or lower the spindle (and, in two-sided drives, the upper read/write head). The 8-inch disk was very similar in structure to the 5¼-inch disk, with the exception that the read-only logic was reversed: the slot on the side had to be taped over to allow writing. The 3½-inch disk is made of two pieces of rigid plastic, with the fabric-medium-fabric sandwich in the middle to remove dust and dirt. The front has only a label and a small aperture for reading and writing data, protected by a spring-loaded metal or plastic cover, which is pushed back on entry into the drive. Newer 5¼-inch drives and all 3½-inch drives engage automatically when the user inserts a disk, and disengage and eject with the press of the eject button. On Apple Macintosh computers with built-in floppy drives, the disk is ejected by a motor (similar to a VCR) instead of manually; there is no eject button. The disk's desktop icon is dragged onto the Trash icon to eject a disk. The reverse side has a similar covered aperture, as well as a hole to allow the spindle to connect to a metal plate glued to the medium. Two holes, bottom left and right, indicate the write-protect status and high-density disk status respectively: a hole means protected or high density, and a covered gap means write-enabled or low density. A notch at top right ensures that the disk is inserted correctly, and an arrow at top left indicates the direction of insertion. The drive usually has a button that, when pressed, springs the disk out with varying degrees of force. Some barely make it out of the disk drive; others shoot out at a fairly high speed.
In a majority of drives, the ejection force is provided by the spring that holds the cover shut, and therefore the ejection speed depends on this spring. In PC-type machines, a floppy disk can be inserted or ejected manually at any time (evoking an error message or even lost data in some cases), as the drive is not continuously monitored for status, and so programs can make assumptions that do not match the actual status. With Apple Macintosh computers, disk drives are continuously monitored by the OS; an inserted disk is automatically searched for content, and a disk is ejected only when the software agrees it should be. This kind of disk drive (starting with the slim Twiggy drives of the late Apple Lisa) does not have an eject button, but uses a motorized mechanism to eject disks; the action is triggered by the OS software (e.g., when the user drags the drive icon to the trash can icon). Should this not work (as in the case of a power failure or drive malfunction), one can insert a straightened paper clip into a small hole at the drive's front, thereby forcing the disk to eject (similar to the mechanism found on CD/DVD drives). Some other computer designs (such as the Commodore Amiga) monitor for a new disk continuously but still have push-button eject mechanisms.

The 3-inch disk, widely used on Amstrad CPC machines, bears much similarity to the 3½-inch type, with some unique and somewhat curious features. One example is the rectangular plastic casing, slightly taller than a 3½-inch disk but narrower, and more than twice as thick, almost the size of a standard compact audio cassette. This made the disk look more like a greatly oversized present-day memory card or a standard PC Card notebook expansion card than a floppy disk. Despite the size, the actual 3-inch magnetic-coated disk occupied less than 50% of the space inside the casing, the rest being used by the complex protection and sealing mechanisms implemented on the disks.
Such mechanisms were largely responsible for the thickness, length, and high cost of the 3-inch disks. On the Amstrad machines the disks were typically flipped over to use both sides, as opposed to being truly double-sided. Double-sided mechanisms were available but rare.

USB Ports Universal Serial Bus connectors on the back of a computer let you attach everything from mice to printers quickly and easily. The operating system supports USB as well, so the installation of device drivers is quick and easy, too. Compared to other ways of connecting devices to your computer, USB devices are incredibly simple. We will look at USB ports from both a user and a technical standpoint, and you will learn why the USB system is so flexible and how it is able to support so many devices so easily. Anyone who has been around computers for more than two or three years knows the problem that the Universal Serial Bus is trying to solve: in the past, connecting devices to computers was a real headache! Printers connected to parallel printer ports, and most computers only came with one. Things like Zip drives, which need a high-speed connection into the computer, would use the parallel port as well, often with limited success and not much speed. Modems used the serial port, but so did some printers and a variety of odd things like Palm Pilots and digital cameras. Most computers have at most two serial ports, and they are very slow in most cases. Devices that needed faster connections came with their own cards, which had to fit in a card slot inside the computer's case. Unfortunately, the number of card slots is limited, and you needed a Ph.D. to install the software for some of the cards. The goal of USB is to end all of these headaches. The Universal Serial Bus gives you a single, standardized, easy-to-use way to connect up to 127 devices to a computer. Just about every peripheral made now comes in a USB version.
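The "up to 127 devices" figure is not arbitrary: USB device addresses are 7 bits wide, and address 0 is reserved for devices that have just been attached and not yet configured. A quick sketch of the arithmetic (the constants come from the USB specification, not from the text above):

```python
# USB assigns each device on a bus a 7-bit address (values 0-127).
# Address 0 is the reserved "default address" a device answers on
# during enumeration, before the host assigns it a permanent address.

ADDRESS_BITS = 7
total_addresses = 2 ** ADDRESS_BITS     # 128 possible address values
usable_devices = total_addresses - 1    # minus the reserved address 0
print(usable_devices)                   # 127 devices per bus
```

In practice the total includes hubs, since each hub also occupies one address on the bus.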
A sample list of USB devices that you can buy today includes printers, scanners, mice, joysticks, flight yokes, digital cameras, webcams, scientific data-acquisition devices, modems, speakers, telephones, video phones, storage devices such as Zip drives, and network connections. In the next section, we'll look at the USB cables and connectors that allow your computer to communicate with these devices.

Parallel port A parallel port is a type of interface found on computers (personal and otherwise) for connecting various peripherals. It is also known as a printer port or Centronics port. The IEEE 1284 standard defines the bi-directional version of the port. Before the advent of USB, the parallel interface was adapted to access a number of peripheral devices other than printers. Probably among the earliest devices to use the parallel port were dongles, used as a hardware-key form of software copy protection. Zip drives and scanners were early implementations, followed by external modems, sound cards, webcams, gamepads, joysticks, and external hard disk drives and CD-ROM drives. Adapters were available to run SCSI devices via the parallel port, and other devices such as EPROM programmers and hardware controllers could be connected through it as well. At the consumer level, the USB interface (and in some cases Ethernet) has effectively replaced the parallel printer port. Many manufacturers of personal computers and laptops consider parallel to be a legacy port and no longer include the parallel interface. USB-to-parallel adapters are available to use parallel-only printers with USB-only systems. However, due to the simplicity of its implementation, the parallel port is often used for interfacing with custom-made peripherals; in versions of Windows that did not use the Windows NT kernel (as well as DOS and some other operating systems), programs could access the parallel port hardware directly.

Keyboard A keyboard, in computer science, is a keypad device with buttons or keys that a user presses to enter data characters and commands into a computer.
They are one of the fundamental pieces of personal computer (PC) hardware, along with the central processing unit (CPU), the monitor or screen, and the mouse or other cursor device. The most common English-language key pattern for typewriters and keyboards is called QWERTY, after the layout of the first six letters in the top row of its keys (from left to right). In the late 1860s, American inventor and printer Christopher Sholes invented the modern form of the typewriter. Sholes created the QWERTY keyboard layout by separating commonly used letters so that typists would type more slowly and not jam their mechanical typewriters. Subsequent generations of typists learned to type using QWERTY keyboards, prompting manufacturers to maintain this key orientation on typewriters. Computer keyboards copied the QWERTY key layout and have followed the precedent set by typewriter manufacturers of keeping this convention.

Modern keyboards connect with the computer's CPU by cable or by infrared transmitter. When a key on the keyboard is pressed, a numeric code is sent to the keyboard's driver software and to the computer's operating system software. The driver translates this data into a specialized command that the computer's CPU and application programs understand. In this way, users may enter text, commands, numbers, or other data. The term character is generally reserved for letters, numbers, and punctuation, but may also include control codes, graphical symbols, mathematical symbols, and graphic images. Almost all standard English-language keyboards have keys for each character of the American Standard Code for Information Interchange (ASCII) character set, as well as various function keys. Most computers and applications today use seven or eight data bits for each character. For example, ASCII code 65 is equal to the letter A.
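The mapping between characters and numeric codes described above can be demonstrated with Python's built-in `ord` and `chr` functions, which convert between a character and its code point (for the characters used here, these are the same values as ASCII):

```python
# ASCII code 65 is the letter 'A', as the text notes;
# the lowercase letters sit exactly 32 codes higher.
print(ord('A'))             # 65
print(chr(65))              # A
print(ord('a') - ord('A'))  # 32

# Seven data bits are enough for the full ASCII set (codes 0-127).
assert all(ord(c) < 128 for c in "Hello, world!")
print(2 ** 7)               # 128 codes with seven bits; an eighth bit doubles that to 256
```

The same codes are what the keyboard driver ultimately hands to applications, which is why pressing "A" produces the same character in every program.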
The function keys generate short, fixed sequences of character codes that instruct application programs running on the computer to perform certain actions. Often, keyboards also have directional buttons for moving the screen cursor, separate numeric pads for entering numeric and arithmetic data, and a switch for turning the computer on and off. Some keyboards, including most for laptop computers, also incorporate a trackball, mouse pad, or other cursor-directing device. No standard exists for positioning the function, numeric, and other buttons on a keyboard relative to the QWERTY and other typewriting keys, so layouts vary from keyboard to keyboard. In the 1930s, American educators August Dvorak and William Dealey designed an alternative key set, the Dvorak layout, which places the letters that occur most frequently in English on the home row so that typists can work faster.

Wednesday, September 4, 2019

Moral Split and Respect Essay -- Morality Right Wrong Essays

Moral Split and Respect We will always find ourselves in "moral split" situations. We struggle to make the right decision, hoping that what we decide will be the correct choice. Sometimes our decisions depend strictly on the notion of self-fulfilling prophecy, while others are made for the sake of philanthropy. We are selfish if the chosen actions turn out to have a negative impact on the majority of people; however, the negativity is unforeseeable. If we knew ahead of time that our decisions were going to be harmful to others, then more likely than not we would have tried to avoid that complication. Then again, life is unpredictable. It is unpredictable just like the Vietnam War. Americans went into the war with cultural relativism. They thought the decision to assist in the fight against communism was the ultimate must. They sent young men blindly into a foreign land and were positive that the outcome would be ideal. Had the North Vietnamese been defeated, it might have been a different story; instead, there were consequences to face. On the other hand, the Vietnamese had two different perspectives on the war. The South Vietnamese believed that the Americans were angels sent from above to rescue them from the communists. The North Vietnamese thought that the Americans should mind their own business. We cannot say either view was right or wrong; rather, both were drawn from the same moral standards but in different circumstances. The South, America, and the North all yearned for victory. They made decisions that each one truly believed to be the best; therefore, no side should be unnecessarily criticized. Similarly to us, they were making the right decisions based on personal valuations of ... ...ting will never "understand everything [and] would be incomplete forever" (249). The only understanding that these people are left with is the pondering of the possible outcome had they chosen otherwise; not to fight.
If a person truly believes that the war is the only way to solve the problem, then it would be ethically correct for him to be involved, because morality is based on a person's own judgment of what is right and wrong. On the other hand, if a person feels, without a doubt, that it is wrong, then it is sad to believe that he would choose to go against his morals.

Works Cited
Johnson, Brendan B. "The Movie Quotes Site: The Deer Hunter." (1997). 6 Dec. 2003.
Dirks, Tim. "Greatest Films: The Deer Hunter." (1996). 6 Dec. 2003.
"IMDb: Full Metal Jacket." (1990). 6 Dec. 2003.
"Amazon.com: Apocalypse Now Redux." (1996). 6 Dec. 2003.

Tuesday, September 3, 2019

Epiphany :: Literary Analysis, Joyce and Calvino

World War I and World War II are basically the same, right? If so, Araby, written around WWI by James Joyce, and The Flash, written around WWII by Italo Calvino, are also the same, no? Indeed, these short stories have many similarities. At the same time, both stories have many differences. Thus, it is difficult to compare the two when considering all the details. If the subject of comparison is more specific, such as epiphany, then more emphasis and effort can be put into the comparison. In Araby, the protagonist falls in love with a girl, but love deceives him. In his moment of epiphany, "[g]azing up into the darkness [he] saw [himself] as a creature driven and derided by vanity; and [his] eyes burned with anguish and anger" (Joyce 1). In The Flash, the protagonist suddenly grasps a reality, but only for an instant: "[He] stopped, blinked: [He] understood nothing. Nothing, nothing about anything. [He] didn't understand the reasons for things or for people, it was all senseless, absurd. And [he] started to laugh" (Calvino 1). The comparison between the epiphanies of both short stories reveals the relationship among the similarities and differences regarding theme, symbolism, and setting. Most importantly, comparing the themes of both epiphanies reveals they can simultaneously be similar and different. An important common theme in both epiphanies is facing reality. In Araby, the protagonist realizes "[his] stay was useless" (Joyce 6) since the young lady only "spok[e] to [him] out of a sense of duty" (Joyce 6). Likewise, in The Flash, the protagonist realizes he "accepted everything: traffic lights, cars, posters, uniforms, monuments, things completely detached from any sense of the world, accepted them as if there [were] some necessity, some chain of cause and effect that bound them together" (Calvino 1). Both characters face the reality and randomness of the world. Even so, each epiphany implies each protagonist faces a different sort of reality.
The protagonist of Araby faces the reality of love and "[sees himself] as a creature driven and derided by vanity" (Joyce 6). On the other hand, the protagonist of The Flash faces the reality of existence and hopes "[he] shall grasp that other knowledge" (Calvino 2). Therefore, reviewing the theme similar to both epiphanies leads to discovering different themes as well. Conversely, looking at the differences in the symbolism of each epiphany hints at a comparable aspect of symbolism.

Monday, September 2, 2019

Vagrancy in Sixteenth and Seventeenth Century England :: British History 16th 17th

Vagrancy in Sixteenth and Seventeenth Century England Throughout the work An Account of the Travels, Sufferings and Persecutions of Barbara Blaugdone, there is a common occurrence of imprisonment. Wherever Blaugdone traveled, she seemed to come across some confrontation with the law. This should not be surprising, for in the time period when this work was written many laws, statutes, and acts had been established to thwart the spreading of unpopular Quaker views. Many acts were established primarily to prevent the ministry of Quakerism; however, universal laws, especially those to prevent vagrancy, were also used against traveling Quakers. Vagrancy had long been a concern in sixteenth-century England, resulting in the passing of four anti-vagrancy bills in 1547 alone. This resulted in legislation so harsh that a person charged with vagrancy could be sentenced to two years' enslavement, which could be extended to life enslavement if they tried to escape. When these bills did not seem to prevent the occurrence of beggars on the street, the Vagrancy and Poor Relief Act of 1572 was instated. This act called for a "three strikes and you are out" policy, whereby on a person's third vagrancy offense they could be rightfully put to death (Woodbridge 272). This legislation was the policy for over twenty years until it was repealed in 1593 for being too strict. In 1597, the new Vagrancy Act authorized the government to banish anyone caught offending the vagrancy laws. After a 1598 statute reestablished slavery as the proper punishment for vagrancy, there were a number of years in which periods of leniency and harshness of punishment alternated. It is important to note the history of these laws, since many of them were never entirely repealed. However, it was in the early seventeenth century that a particular piece of legislation finally became the common law that would rule for centuries.
In 1601, England passed the Act for the Relief of the Poor, which would be the commanding authority on this issue until 1834. This act established the church as the sole establishment responsible for the care of the poor. If a family was not able to get by, it was the responsibility of the area parish to ensure that the family was taken care of (Woodbridge 272).

Sunday, September 1, 2019

Gillette Fusion Essay

Recommendation In order to increase total sales and put Gillette Fusion on track to be a $1 billion business in the next few years, Gillette should launch a new advertising campaign and reduce cartridge package prices by 20% with the introduction of a one-time coupon.

Explanation The media's reaction to the "blockbuster" advertisement campaign highlights many of the campaign's flaws. The campaign focused on the product's features rather than its benefits. Due to the ad's product focus, Gillette failed to communicate why the additional blades and elastomer handle coating improved the quality of the consumer's shave. The proposed advertising campaign would address these flaws and focus on the customer experience. In order to educate consumers, Gillette should employ a mass media campaign similar to Pepsi's "Pepsi Challenge." This campaign will include blindfolded individuals testing and comparing the Gillette Fusion with several other razors, and will be hosted by a celebrity who shares brand qualities similar to the Fusion's. The celebrity will be young, sleek, and innovative, like Ashton Kutcher or Dwyane Wade.

• TV Advertisement: The television advertisements will include a short introduction by the celebrity and clips of customers who have taken the challenge describing why the Gillette Fusion is superior to its competitors. It should be featured during male-focused programming, like sporting events and adult comedies.

• Print Advertisement: The print advertisement will feature a picture of the celebrity next to the razor and several quotes from individuals who have taken the challenge. It should be featured in men's style magazines (GQ), music magazines (Rolling Stone), and sports magazines (ESPN, Sports Illustrated).

• Radio Advertisement: The radio advertisement will begin with an introduction by the celebrity describing why he prefers the Gillette Fusion and then transition to reviews by those who have taken the challenge (i.e.,
â€Å"The Gillette Fusion is incredible because †¦ If you don’t believe me, hear what people who have taken the challenge have to say for themselves †¦Ã¢â‚¬ ). It should be featured during adult talk shows (Howard Stern Show) and sports programming. In order to encourage users to purchase the product, Gillette should offer a 20% discount on cartridges with the introduction of a one time coupon. Because 64% of men look at the price of the cartridge before  purchasing the corresponding razor, introducing a 20% discount with a onetime coupon will encourage those consumers weary of the high cartridge prices to purchase the razor. Once purchasing and using the razor, it is likely that the consumer will notice a significant difference in the quality of shave, and be willing to pay a higher price for the superior product. This is supported by the fact that of the 9,000 men who tested the new razor, Fusion was preferred 2 to 1 over the competition. For the few who would usually not be willing to pay the higher price for the cartridges, they will likely continue to purchase the cartridges because of the switching costs associated with purchasing a new razor. The coupon should be displayed at points of purchase and in direct mail outs. The former can be achieved by providing collaborators with display allowance. Alternative Strategies Unlike the proposed strategy, each of the alternative strategies fails to simultaneously communicate the effectiveness of the product and encourage customers weary of the price to purchase the product. †¢Lowering the retail price of the razor – This will not encourage customers to purchase the product because (a) 64% of consumers look at the cartridge price before purchasing a razor and (b) a razor is a one-time cost to consumers, with a less elasticity of demand than cartridges. Also, without a new advertising campaign, they also will continue not to understand why the product is superior, and more expensive, than its competitors. 
• Reducing cartridge package prices by changing cartridge package size – Replacing the current four-cartridge package with a three-cartridge package is only effective if consumers fail to register the difference in package size, because the price per cartridge would actually increase. Moreover, this strategy fails to educate consumers on why the product is superior to, and more expensive than, its competitors.