Sunday, August 23, 2020

Learn About the Peripheral Nervous System

The nervous system consists of the brain, spinal cord, and a complex network of neurons. This system is responsible for sending, receiving, and interpreting information from all parts of the body. The nervous system monitors and coordinates internal organ function and responds to changes in the external environment. This system can be divided into two parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS is composed of the brain and spinal cord, which function to receive, process, and send information to the PNS. The PNS consists of cranial nerves, spinal nerves, and billions of sensory and motor neurons. The primary function of the peripheral nervous system is to serve as a pathway of communication between the CNS and the rest of the body. While CNS organs have a protective covering of bone (the brain by the skull, the spinal cord by the spinal column), the nerves of the PNS are exposed and more vulnerable to injury.

Types of Cells

There are two types of cells in the peripheral nervous system. These cells carry information to (sensory nervous cells) and from (motor nervous cells) the central nervous system. Cells of the sensory nervous system send information to the CNS from internal organs or from external stimuli. Motor nervous system cells carry information from the CNS to organs, muscles, and glands.

Somatic and Autonomic Systems

The motor nervous system is divided into the somatic nervous system and the autonomic nervous system. The somatic nervous system controls skeletal muscle, as well as external sensory organs such as the skin. This system is said to be voluntary because its responses can be controlled consciously. Reflex reactions of skeletal muscle, however, are an exception; these are involuntary responses to external stimuli. The autonomic nervous system controls involuntary muscles, such as smooth and cardiac muscle.
This system is also called the involuntary nervous system. The autonomic nervous system can be further divided into the parasympathetic, sympathetic, and enteric divisions. The parasympathetic division functions to inhibit or slow autonomic activities such as heart rate, pupil constriction, and bladder contraction. The nerves of the sympathetic division often have an opposite effect when they are located within the same organs as parasympathetic nerves. Nerves of the sympathetic division speed up heart rate, dilate pupils, and relax the bladder. The sympathetic system is also involved in the fight-or-flight response, a response to potential danger that results in an accelerated heart rate and an increase in metabolic rate. The enteric division of the autonomic nervous system controls the gastrointestinal system. It is composed of two sets of neural networks located within the walls of the digestive tract. These neurons control activities such as digestive motility and blood flow within the digestive system. While the enteric nervous system can function independently, it also has connections with the CNS, allowing for the transfer of sensory information between the two systems.

Divisions

The peripheral nervous system is divided into the following divisions:
Sensory Nervous System - sends information to the CNS from internal organs or from external stimuli.
Motor Nervous System - carries information from the CNS to organs, muscles, and glands.
Somatic Nervous System - controls skeletal muscle as well as external sensory organs.
Autonomic Nervous System - controls involuntary muscles, such as smooth and cardiac muscle.
Sympathetic - controls activities that increase energy expenditure.
Parasympathetic - controls activities that conserve energy expenditure.
Enteric - controls digestive system activity.
Connections

Peripheral nervous system connections with the various organs and structures of the body are established through cranial nerves and spinal nerves. There are 12 pairs of cranial nerves in the brain that establish connections in the head and upper body, while 31 pairs of spinal nerves do the same for the rest of the body. While some cranial nerves contain only sensory neurons, most cranial nerves and all spinal nerves contain both motor and sensory neurons.

Saturday, August 22, 2020

Macroeconomic Variables and Equity Market Relationship

Introduction

The equity market, also known as the stock market, is the market in which buyers and sellers trade equity instruments. There are a few types of equity securities; the most common forms are preferred stock and common stock. The equity market is important for a company because it allows the company to raise funds without incurring debt. However, not all companies are allowed to issue shares; only public listed companies, which are limited liability companies, are allowed to offer shares for sale to the public. Buyers of the stock also acquire ownership in the corporation, and common shareholders have the right to vote on issues important to the corporation. A company pays its shareholders dividends each year based on the profit of the year. There are two main venues for investors to trade corporate stock: organized exchanges and over-the-counter (OTC) markets. Organized exchange trading is governed by rules and formal procedures to ensure the effectiveness of the market. In contrast, stocks traded over-the-counter are traded more informally and electronically. Historically, statistics have shown that stock prices can be determined by economic factors.

Literature Review

The objective of the authors in conducting the research is to examine the relationship between macroeconomic variables and the equity market. The research can help stakeholders understand more about the equity market and the impact of macroeconomic variables on it. Kim, McKenzie, and Faff (2003) investigated the impact of scheduled government announcements of macroeconomic variables on the risk and return of three major US financial markets, including the equity market.
Ioannidis and Kontonikas (2007) investigated the impact of monetary policy on equity market performance in 13 OECD countries. Abugri (2006) studied the relationship between macroeconomic variables and equity market performance. Hooker (2004) investigated the use of macroeconomic variables to predict equity market performance using the Bayesian model developed in Cremers (2002). Patel (2012) conducted research on the Indian stock market into the effect of macroeconomic determinants on market performance. Trivedi and Behera (2012) and Sangmi and Hassan (2013) also carried out research on the Indian stock market into the relationship between equity prices and macroeconomic variables. Abdelbaki (2013) used an Autoregressive Distributed Lag model to examine the relationship between macroeconomic variables and the Bahraini equity market. Verma and Ozuna (2004) carried out an empirical study of how Latin American stock exchanges are influenced by macroeconomic variables. Maysami, Howe, and Hamzah (2004) examined the cointegration between macroeconomic variables and the stock market's sector indices rather than the composite index. Most of the journals chose the interest rate and money supply as macroeconomic variables that affect the equity market. However, the foreign exchange rate, inflation rate, industrial production, gross domestic product, foreign direct investment, unemployment rate, gold price, and securities exchange index are also popular macroeconomic variables used in such studies. In addition, a few authors also used less common variables, such as the balance of trade, consumer price index, producer price index, volatility in foreign markets, and retail sales growth, in their research. The following table shows the macroeconomic variables used by the authors in their research.
The reason the authors conducted the research is to provide empirical evidence and also to extend research areas that previous researchers left unexplored. Kim, McKenzie, and Faff (2004) conducted their research because the literature had only studied news announcements, without examining the impact of major macroeconomic variable announcements or of actual news announcements that differ from participants' expectations and are reflected in stock prices. Ioannidis and Kontonikas (2008) expanded the literature on the significance of monetary policy for stock prices by including the dividend component of stock returns in 13 OECD countries. Abugri (2008) examined whether macroeconomic indicators could significantly explain the stock market returns of Latin America. Verma and Ozuna (2005) investigated whether macroeconomic developments significantly influence the equity markets of other Latin American countries. Hooker (2004) extended the research by including macroeconomic variables to examine expected emerging equity market returns. Patel (2012) and Trivedi and Behera (2012) extended the existing literature by including eight additional macroeconomic variables to test the effect of macroeconomic determinants on the performance of the Indian stock market. Sangmi and Hassan (2013) examined the effects of macroeconomic variables on the Indian stock market within the Arbitrage Pricing Theory (APT) framework. Abdelbaki (2013) carried out research to identify the significant relationship between macroeconomic variables and Bahraini stock market development (BSMD). Maysami, Lee, and Hamzah (2004) extended the research on macroeconomic variables and the stock market's sector indices rather than the composite index, examining relationships between selected macroeconomic variables and Singapore's stock market index (STI) and the Singapore Exchange sector indices.
In order to determine the relationship between macroeconomic variables and stock returns, a variety of tests were used by different researchers for different purposes. First, Ioannidis and Kontonikas (2008) used the Jarque-Bera test to test for normality. They showed that stock returns are non-normally distributed, which renders the results of hypothesis testing invalid. To account for the non-normality of stock returns, bootstrap analysis was adopted. The researchers also used the ordinary least squares method and the Newey-West heteroscedasticity-consistent covariance matrix estimator to examine the negative relationship between stock returns and interest rates. In addition, Patel (2012), Trivedi and Behera (2012), and Maysami, Lee, and Hamzah (2004) found that the Johansen cointegration test (Johansen and Juselius, 1990) is more powerful in estimating cointegrating vectors than Engle and Granger's (1987). This is because cointegration can be tested in a full system of equations under one procedure, without requiring a particular variable to be normalized. This enables researchers to avoid carrying errors from the first step into the second step. It also allows the avoidance of a priori assumptions of endogeneity or exogeneity. Moreover, the Johansen framework incorporates dynamic co-movements and simultaneous interactions, which enables researchers to consider the channels through which the macroeconomic variables affect asset prices as well as their relative importance. Furthermore, Trivedi and Behera (2012), Patel (2012), and Verma and Ozuna (2005) note that the Augmented Dickey-Fuller unit root test must be conducted first to detect the non-stationarity of the variables, before the Vector Error Correction Model, which is used to investigate the long-run relationship and short-run dynamics among the variables.
For the ADF test, the null hypothesis of non-stationarity was rejected for all the series. The model is then estimated in log first differences, given that the log first differences of all series are stationary, which helps ensure that the data series do not contain unit root problems and avoids spurious relationships. Furthermore, Trivedi and Behera (2012) and Abugri (2008) estimated impulse response functions (IRFs) derived from the Vector Autoregressive (VAR) model. These estimates are used to measure the time profile of the effect of a shock on behavior and to examine the dynamic relationship of equity prices with macroeconomic variables. Lutkepohl (1991) states that, depending on the ordering of the variables in the VAR model, the results from impulse response functions may differ greatly, which makes them subject to the "orthogonality assumption". Consequently, Koop, Pesaran, and Potter (1996) and Pesaran and Shin (1998) address the issue by using "generalized" impulse response functions, which are invariant to any reordering of the variables in the VAR and also help ensure that the results do not depend on the orthogonality assumption. Last but not least, Hooker (2004) used the Bayesian model selection approach, because the results could be sensitive to model specification issues, particularly when including additional variables in the regressions. Meanwhile, theory gives little guidance as to which macroeconomic variables should be included or excluded. This approach considers all feasible (linear) combinations of included explanatory variables, assigns each of them flat priors of inclusion, and estimates their posterior probabilities. The purpose of the researchers is to examine the significant or insignificant relationship between macroeconomic variables and the equity market.
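The unit-root pre-test described above can be sketched in a few lines. The following is a simplified illustration, not the full Augmented Dickey-Fuller procedure the cited papers use: it omits lagged-difference terms and deterministic trends, runs on made-up simulated series, and in practice its t-statistic would be compared against Dickey-Fuller critical values rather than normal ones. It fits the basic Dickey-Fuller regression Δy_t = α + β·y_{t−1} + ε_t; a strongly negative t-statistic on the lagged level is evidence against a unit root, i.e. evidence of stationarity.

```python
import random

def ols_slope_tstat(x, y):
    # OLS of y on x with an intercept; returns the slope and its t-statistic.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    alpha = my - beta * mx
    resid = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
    s2 = sum(e ** 2 for e in resid) / (n - 2)      # residual variance
    return beta, beta / (s2 / sxx) ** 0.5

def dickey_fuller_stat(series):
    # Regress the first difference on the lagged level; the t-statistic on
    # the lagged level is the (basic) Dickey-Fuller test statistic.
    lagged = series[:-1]
    diff = [series[t + 1] - series[t] for t in range(len(series) - 1)]
    _, tstat = ols_slope_tstat(lagged, diff)
    return tstat

random.seed(42)
n = 500
# Random walk: contains a unit root (non-stationary).
rw = [0.0]
for _ in range(n):
    rw.append(rw[-1] + random.gauss(0, 1))
# Stationary AR(1) with autoregressive coefficient 0.5.
ar = [0.0]
for _ in range(n):
    ar.append(0.5 * ar[-1] + random.gauss(0, 1))

t_rw = dickey_fuller_stat(rw)   # near zero: cannot reject a unit root
t_ar = dickey_fuller_stat(ar)   # strongly negative: stationary
print(f"DF statistic, random walk: {t_rw:.2f}")
print(f"DF statistic, stationary AR(1): {t_ar:.2f}")
```

In applied work researchers would use a library implementation (e.g. `adfuller` in statsmodels) with lag selection and proper critical values, and, as the papers note, follow the unit-root pre-test with Johansen's procedure before fitting a vector error correction model.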
Kim, McKenzie, and Faff (2003) found a significant relationship between the equity market and consumer and producer price information. This can be confirmed by government announcements concerning unexpected balance-of-trade news, and by bond market and financial market volatility, which have a great impact on and are important to the internal economy and therefore influence equity returns. Ioannidis and Kontonikas (2007) identified that the relationship between the monetary policy interest rate and expected equity returns is significant. This showed that the central bank can change the interest rate to influence the stock market. In addition, Abugri (2006)

Friday, August 21, 2020

Retail Marketing Essay Example | Topics and Well Written Essays - 1250 words

The UK supermarket segment has recently embraced various store formats. Among these supermarkets is Tesco, which has adopted these kinds of formats: the Metro, the conventional supermarket, and the Extra formats. Each format is designed for a unique target market, which uses Tesco as the brand (Cox and Brittain, 2000, p. 29). The standard-format Tesco supermarket is most often situated within a short distance of city centres and towns. These format stores are situated in particular areas that can accommodate an establishment that is bigger in size, even though space cannot adequately accommodate it. From the consumer's perspective, the store's standard format has the capacity to offer more varieties of products. Many of the stores' locations in the standard format ensure that customers are not required to have a car to enjoy the advantages of this format (Berman and Evans, 2004, p. 9).

Extra is the flagship format for the new Tesco stores. Tesco's Extra stores in this format are large retail establishments that are often built in out-of-town areas. They are built either as part of a wider out-of-town retail development or as standalone operations. The Tesco Extra format's emphasis is on the provision of a wide range of services and goods beyond those available in the smaller standard supermarket format. Tesco Extra format products are not just the leading non-food and core food items but are typically associated with the domain of supermarkets. These Extra format stores stock products including kitchenware, clothing, financial services and products, and electronics. Other ancillary services and goods that can be bought include petrol and branded food typically sold at petrol stations and in the cafés situated in the stores (Bhatia, 2008, p. 47). The Tesco Metro format ensures that the store is in a convenient location that is close to places

Knowledge Management A Personal Knowledge Network Perspective

Questions:
Assignment 1: Assessing data and information needs
Assignment 2: Stakeholders, personal networking and the decision-making process
Assignment 3: Communication processes in an organization
Assignment 4: Improving access to systems of information and knowledge

Answers:

Assignment 1

1. To change the situation that has been occurring, some tough decisions have to be taken to address patients' interests and protect them from unacceptable treatment. The patient must be kept at the forefront of everything, and this can be done by adopting the following approaches:
- Put emphasis on, and commitment to, the common good of putting the patient's life first above everything else throughout the system.
- Ensure the availability of standard healthcare systems and non-compliance with inadequate equipment.
- Create rigorous policies against the use of unsatisfactory equipment.
- Promote strong leadership in professional disciplines such as nursing, and also offer continuous support to those in leadership roles.
- Information should be shared with everyone. It should be easily accessible by any individual who needs it, or for performance comparison by any individual, service organization, or institution.
- Ensure that research undertaken by the organization is patient-centred and not for financial purposes. The outcome of the research should also be of importance to the patient. They should not be doing any personal favours to any patient by selecting them for research; every patient should have the right to be involved in research (Walshe, 2010).

2. The NHS is a highly diverse organization, and it needs to find the sources and information to decide on the steps to be taken to implement the strategies of the institution to avert the situations that occurred (Ham, 2013).
It can gather cases and then consider mitigation measures with the help of the following sources:
- The peer review is one of the major sources from which such cases can be identified. These reviews are made on the basis of concerns, often serious ones, which raise questions about the safety of the service provided by the management; hence the information can be gathered from the reviews and acted upon.
- The auditors' report does similar work of identifying problems in the service and sending them to the boards regarding the deficiencies in the organization's risk management procedures. Thus the reports can be considered, the faults identified, and mitigation measures planned from them.
- Annual surveys are a great method of discovering the concerns of the staff and the patients, who are the people involved in the surveys. Hence, by considering the surveys, the Trust may find the exact areas where the organization is lagging and also the departments which are not performing well, so that change can be brought about where required.
- The Royal College of Surgeons, which reported the NHS to be lagging in surgical techniques and described its process as dysfunctional, can be a good source of critical review; it may point out the problems faced by the surgical division, and implementations may be made accordingly (Ham, 2012).

3. The sources or the information can be gathered and implemented by the following people clearly better than by any other sources. These people or groups are mentioned below:
- Executive and non-executive directors are the people responsible for the success of the NHS, including the management of risk and compliance with relevant legislation.
- The audit committee can be a good source for the maintenance of the risk management system.
- The internal auditors work with the audit committee, as they have in-depth knowledge of the risks of the service in detail.
- The CEO is accountable for the maintenance of a good system of control that supports the organization's objectives.
- The executive team is responsible for the review of corporate risks.
- All the staff members are responsible for the management of risks and are the best sources of information regarding the management of the hospital (Ham, 2011).

4. The following suggestions may improve the effectiveness of the organization's policies:
- There should be a standard of following the common values of the institution; these should be embedded and effectively communicated in the NHS constitution and should be owned and lived by all members of the organization.
- The patients must be the first priority of whatever is done by the NHS within the available resources; they must receive effective care, the staff must be compassionate to the patients, the staff must be committed and work with a common culture, and the patients should be protected from any harm and receive the utmost care.
- The constitution should comply with all codes of conduct and rules and regulations, for the staff to comply with and obey individually. It should be everyone's duty to comply with the given standards mentioned in the constitution (Ham, 2011).

Assignment 2

1. The most important stakeholders, who have high power and interest in the activities of the NHS, are:
- Executives
- HCC providers
- Staff side
- Local full-time officials
- Special delegates
- Links
- HC representatives
- East of England SHA
- Public health
- GPs
- Board members
- Monitor
- Local acute trusts

In the next group come the Group B stakeholders, who are often difficult to manage. They are generally regional agents or members of the regulatory bodies.
Most of the time they are passive, but they can deal an enormous blow to the organization. It is therefore necessary to involve them in the objectives of the organization (Ham, 2010). The stakeholders in Group C are actively involved in the working of the organization; although they have very little influence, their contributions to the organization can be valuable. They are voluntary organizations, local communities, and so on. The stakeholders in Group D are less involved and are therefore only kept informed, but not allowed to take any decisions.

2. The Group B stakeholders should be involved because they have strong links at AD level with CSF, they have strong board engagement, they link the board and the directors, they set up the joint planning committee, and they have excellent links at director and operational level with local acute trusts (Ham, 2006). The Group B stakeholders are also approachable because they have a good relationship with the health scrutiny committee; the interim head of communications gets positive associations with local media by showing positive reports, develops strong local user groups, and also establishes a positive CQC relationship. The Group C stakeholders can be kept well informed, as they can provide help with the customer and patient feedback mechanisms in place; the enthusiastic reviews are also kept in place (Dixon, 2005). The Group D stakeholders do not need to be involved; it is only necessary to maintain good social relations with them.

3. The stakeholders in Group A, i.e. the GP Consortia, allow for the new development of relations. The strengthening of JNC relations is achieved with the help of this group. They would help to rebuild a struggling system by maintaining new relations. The development of HCC relations with the NHS must be made as fast as possible (Coombes, 2008).
Group B must be given the job of developing relations with the CFT membership and governors, discussions with the CFT, development of the relationship with MPs, and engaging as early as possible with the Health and Wellbeing board. Group C must be asked to work closely with the acute trusts to move services into the community, and to develop a strategic partnership with the independent sector to secure business (Beecham, 2000). Group D can provide a national platform for the organization to develop as a community provider.

4. The stakeholders must first be managed, and then, with a proper structure in place, the plans can be designed for the stakeholders. They will be managed as follows:
- There will be a lead director for stakeholder engagement, and this director will be responsible for overseeing engagement strategies, evaluating their effectiveness, and reporting to the board.
- For the stakeholders in the group of key players, each stakeholder will have a director who will be the key link with the organization. The assistant directors are expected to engage with the PBC group on a monthly basis, form strong relations with the key commissioning groups, and develop key improvement strategies (Beecham, 2000).
- A bi-monthly stakeholder forum, called the stakeholder management committee, will be developed, where the directors and assistant directors will report on advances and progress on key issues with key stakeholders. This will allow a key issue to be taken up and a decision made on when it will be implemented, so that the strategies of the organization can be executed as planned.
- Differential strategies are used to deal with different stakeholders and to plan what relationship is to be developed with each stakeholder, that is, whether it should be concluded or expanded (Amine Chatti, 2012).
The specific strategies for the stakeholders will be as follows:
- A targeted stakeholder management
- Executive-to-executive alignment of partners
- Operational relationships managed more efficiently
- Management of the proactive organization
- Stakeholder events to involve partners and patients and improve relationships
- Service Development Panels created to review service proposals
- Increasing attendance at partner boards

Assignment 3

1. The present report under consideration is the report of Gloucestershire Hospitals NHS Foundation Trust. All the members of the staff of the hospital have an important role to play in effective communication and in establishing the network of communication internally, so that the good news of the institution is spread across the media. Communication is not limited to the communicati

Thursday, July 9, 2020

Competition and Market Power Essay - 1100 Words

Competition and Market Power (Essay Sample) Content:

Imperfect Competition: Competition and Market Power

Difference in Demand Curve

In a perfectly competitive market, the market demand curve slopes downward, reflecting the fact that the quantity demanded of a good decreases as its price increases. Price is determined by the intersection of the market demand and the existing supply in the market (Robinson, 1933). The market supply determines the market price of goods produced. The existing firms are forced to charge at or near the equilibrium price of the existing markets, or the consumers shift their focus to purchasing other companies' goods that have lower prices. An individual company's demand curve is, therefore, set at the market's equilibrium price (Greenhut, Norman & Hung, 1987). The firm's demand curve also differs significantly from that of the existing market: the firm's demand curve is horizontal at the equilibrium price, while the demand curve for the market slopes downward. The horizontal nature of the curve indicates that demand for the good is perfectly elastic; any unilateral change in the price of the goods by an individual firm will lead to no sales or a loss. Firms existing in this kind of market increase their market share by initiating strategies to offer lower-priced goods than the existing competitors (Greenhut et al., 1987).

Monopolies, however, exercise extensive market control over prices. These firms face negatively sloped demand curves: they are forced to lower their prices to enable them to sell larger quantities of output, and a decrease in price leads to an increase in the quantity demanded. The shape of the demand curve, however, has several implications for firms in this market. The downward slope of the demand curve is due to the market power that a company has; it can choose to raise or lower its prices without losing its customers.
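As a minimal numerical sketch of this contrast (all numbers are hypothetical, chosen only for illustration): assume a linear inverse demand P(Q) = 100 − Q and a constant marginal cost of 20. A competitive industry prices at marginal cost, while a monopolist sets marginal revenue equal to marginal cost, restricting output and raising the price.

```python
# Hypothetical linear inverse demand P(Q) = a - b*Q with constant marginal
# cost c; compares the competitive outcome (price = marginal cost) with the
# monopoly outcome (marginal revenue = marginal cost).
a, b, c = 100.0, 1.0, 20.0

def price(q):
    return a - b * q

# Perfect competition: price is driven down to marginal cost.
q_comp = (a - c) / b            # solve P(Q) = c  ->  Q = (a - c) / b
p_comp = price(q_comp)

# Monopoly: revenue R(Q) = (a - b*Q) * Q, so MR = a - 2*b*Q; set MR = c.
q_mono = (a - c) / (2 * b)
p_mono = price(q_mono)

print(f"competitive: Q={q_comp:.0f}, P={p_comp:.0f}")  # -> competitive: Q=80, P=20
print(f"monopoly:    Q={q_mono:.0f}, P={p_mono:.0f}")  # -> monopoly:    Q=40, P=60
```

Under these assumed numbers the monopolist sells half the competitive quantity at three times the competitive price, which is exactly the market power the text describes.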
Unlike firms in the perfectly competitive market, monopoly companies have the ability to dictate the price of their commodities. They have market power due to fewer competitors. Such firms therefore focus mainly on product differentiation, or differences unrelated to price.

Companies dealing in electricity transmission are monopolies. Power regulation has changed rapidly; however, these changes only affect the companies that generate electricity. Transmission and distribution are not under serious consideration, due to the conventional view that distribution and transmission are natural monopolies (Robinson, 1933). The industry is built on the belief that economies of scale render the business a monopoly. State law also allows a single firm to supply power to most residents and other companies. Most power transmission companies are, however, politically produced monopolies: the government intervenes in the formation of these firms to protect customers from the abuse that arises from exploitative monopolies. Few companies exist that transmit electricity in the country. This industry is characterized by a lack of competition, and it also experiences the absence of a viable substitute for the product. A single producer has control over both the production price and the market price of the goods produced.

A Monopoly

Google is an example of a monopoly. It started with a simple search engine but has managed to grow into the pursuit of every tech-related economy. The firm took off due to its innovative and powerful marketing techniques and managed to expand to offer other services like email, online data storage, GPS technology, and many more (Robinson, 1933). Now it has control of more than 90 percent of the total market share. The company imposes control over the market via preferential placements: it promotes its services at the top of its search results, which has led it to favor its own price-comparison service.
It also dominates the global search market and can penalize competitors. The company has virtually unassailable the existing competitive advant age. They have the capacity to deploy the present strength beyond other confines of search to any other service of their choice. It suppresses new entrants, and their innovations become imperiled. Most monopoly firms maintain their power by forming barriers to entry into the market by other competitors by dealing in productions of larger economies of scale, predatory pricing or limiting pricing. Other firms also have perpetual ownership of the existing scarce resources and have developed a brand loyalty with their consumers.TYPES OF FIRMSThe industry operates on the basis of products, geographical reach, and consumer targets. Each industry comprises of numerous firms. There exist four basics types of industry.PERFECT COMPETITIONIt is the industry infrastructure that comprises some firms that deal in the production of similar products or substitutes. Consumers also tend to have accurate and complete information regarding their prices (Joekes et al., 2008).MONOPOLYPure monopoly indu stry comprises of a single firm that specializes in production or supply of a product or a service that has no close substitutes. The single player controls all the existing resources and technology and blocks competitors from entering the industry.MONOPOLISTIC COMPETITIONThis contest pertains to industries that have both the characteristics of competition and monopoly. The industry has many firms that offer subst... Competition and Market Power Essay - 1100 Words Competition and Market Power (Essay Sample) Content: Imperfect Competition: Competition and Market PowerNameInstitutionSubjectDifference in Demand CurveIn a perfectly competitive market, the demand curve usually slopes downward reflecting that possible fact the quantity demand of a good decreases as the price increases. 
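The price-setting contrast described above can be made concrete with a small worked example. The linear demand curve and the cost figure below are invented purely for illustration: a competitive firm ends up selling at marginal cost, while a monopolist facing the whole market demand curve P = 100 - 2Q restricts output to the point where marginal revenue equals marginal cost.

```python
# Illustrative comparison of competitive vs. monopoly pricing.
# All numbers are hypothetical: market demand P = 100 - 2Q, constant
# marginal cost MC = 20.

a, b = 100.0, 2.0   # demand intercept and slope (P = a - b*Q)
mc = 20.0           # constant marginal cost

# Perfect competition: price is driven down to marginal cost.
q_comp = (a - mc) / b          # quantity where P = MC
p_comp = mc

# Monopoly: choose Q where marginal revenue (a - 2bQ) equals MC.
q_mono = (a - mc) / (2 * b)
p_mono = a - b * q_mono

print(f"competitive: P={p_comp:.1f}, Q={q_comp:.1f}")   # P=20.0, Q=40.0
print(f"monopoly:    P={p_mono:.1f}, Q={q_mono:.1f}")   # P=60.0, Q=20.0
```

Under these hypothetical numbers the monopolist sells half the competitive quantity at three times the price, which is exactly the higher-price, lower-quantity outcome the essay attributes to market power.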

Thursday, July 2, 2020

Quality analysis for data in route optimization

1 Introduction

1.1 Background
In the first quarter of 2009, Itella Oyj, the company in charge of postal services within Finland, initiated a project to optimize its delivery routes in both Early Morning Delivery and Daily Mail Delivery. The main goal of this project was to make the delivery processes more efficient in response to changing demand for conventional mail and information services. Adapting to these trends meant not just maintaining financial growth but also conforming to higher standards for a greener environment. Previously, route measurement had been done by regional planners using conventional means: physically travelling to the various regions and measuring and calculating distances according to set parameters. This project intended to streamline the process by adopting new systems and integrating them with existing ones, which we shall look at later.

1.2 Purpose of the thesis
As with many new company projects in their initial stages, challenges are inevitable and all sorts of problems are likely to be encountered. One of the major issues faced in the project is ensuring that the data used for route optimization is of high quality. Problems with data quality have created nearly crippling obstacles to the optimization process, causing schedule delays and thus increased costs for the company, as well as high unbudgeted costs for correcting faults. The purpose of this thesis is to examine and analyze the current quality of the data used in route optimization, and possibly to formulate quality standards from that analysis. The research questions are therefore broadly divided into four, as listed below:

What is data quality?
What is the importance of certain quality levels for the company?
What are the methods used to describe and analyze data quality and the data quality process?
What is the current level of quality, and have the resources invested been worthwhile?

The above questions outline the issues we shall look into more deeply in this paper.

2 Literature Review

2.1 Quality

2.1.1 Introduction
This chapter covers the definition of quality, and more specifically data quality: its importance to a company's processes, the benefits of good quality, and the issues that require paramount concern in maintaining it. The line between data quality and process quality, as we shall see, is thin, and the two will therefore be referred to together many times in this document.

2.1.2 Defining Quality
To understand data quality, quality itself has to be defined. Quality has, for the past few decades, been considered the cornerstone of excellence and competitive edge for a majority of companies that have gained a stronghold in their areas of operation. Like beauty, quality is in the eye of the beholder, and in a business environment the beholder is always the client or end-user; in other words, quality is whatever the customer says it is. Many scholars have come up with different definitions, and sometimes one has to narrow down the nature, degree, and rationale being considered. According to Ivancevich et al. (2003), quality is a function of policy, information, engineering and design, materials, equipment, people, and field support. "Quality means getting it right first time, rather than merely laying down an acceptable level of quality" (Philip Crosby, 1995). Quality is the degree to which something conforms to requirements, which first need to be defined in terms of parameters or characteristics that vary between processes; for a mechanical or electronic product, these would be performance, reliability, safety, and appearance. "Quality is being creative, innovative, fluid and forthright" (Drucker, Peter (1985),
Innovation and Entrepreneurship, Harper & Row).

2.1.3 What, then, is data quality?
Data quality (DQ) is a multidimensional process entity; its complexity makes it difficult to define with a single measure. Leo Pipino and colleagues, in an article on data quality assessment, describe data quality as having more than ten dimensions. Other experts have analyzed these dimensions and narrowed them down to the following: accuracy, consistency, completeness, timeliness, and auditability. Andrew Greenyer, vice president of international marketing for the Pitney group, also notes that in addition to these aspects, an organization should make sure everyone has a common understanding of what the data represents (Andrew Greenyer, November 26, 2007; https://www.customerthink.com/article/importance_quality_control_how_good_data). Below is an excerpt from Excution MIH that aims to define these five dimensions.

Accuracy of data is the degree to which data correctly reflects the real-world object or event being described; for example, a customer's address in a customer database is their real address.

Completeness of data is the extent to which the expected attributes of data are provided. For example, customer data is considered complete if all customer addresses, contact details, and other information are available.

Consistency of data means that data across the enterprise is in sync; if a customer changes their address, for example, they should not remain linked to both the old and new addresses.

Timeliness of data is reflected in how deadlines and schedules are met within a process, and in the availability of data when it is needed.
Auditability of data is its ability to be examined and analyzed to determine its level of accuracy and any discrepancies or inconsistencies.

With this in mind, we can conclude that data quality is a continuously adaptive state of being in a no-error zone, maintained by continuous engagement in functions that aim at efficiency and accuracy in both results and processes.

2.2 Types of quality
Jean, from the International Food Safety and Quality Network, defines quality in two ways: it can be either objective or subjective. Objective quality is the degree to which a process, or the outcome of a process, stays within a predetermined set of criteria presumed essential to the ultimate value it provides. Subjective quality is the level of perceived value reported by the end-user who benefits from a process or its outcome, for example the pain relief provided by a medication. In both cases he links quality to the ultimate end product of a process. It is difficult to separate the two, especially in route optimization, since they go hand in hand. In an article on data quality assessment, Leo L. Pipino and colleagues discuss three important steps a company should take to improve process and data quality as a whole:

Performing subjective and objective data quality assessments;
Comparing the results of the assessments, identifying discrepancies, and determining their root causes; and
Determining and taking the necessary actions for improvement.

The FIGURE below gives a clearer picture of the issue discussed above.

2.3 Quality Management
When a company incorporates quality into its processes, its overall objective is to satisfy the parties involved at low cost while maintaining process efficiency. Quality is an ever-evolving perception determined by the value provided by the end result of a process.
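The dimensions listed above lend themselves to simple automated checks. The sketch below is a hypothetical illustration of objective assessments for completeness and consistency; the record fields and validation rules are invented for the example and are not Itella's actual schema.

```python
# Minimal sketch of objective data quality checks on address records.
# Field names and rules are hypothetical examples, not a real schema.

records = [
    {"id": 1, "street": "Mannerheimintie 10", "postcode": "00100", "route": "A1"},
    {"id": 2, "street": "", "postcode": "00100", "route": "A1"},                    # incomplete
    {"id": 3, "street": "Mannerheimintie 12", "postcode": "abcde", "route": None},  # inconsistent
]

REQUIRED = ("street", "postcode", "route")

def completeness(rec):
    """Completeness: all expected attributes are present and non-empty."""
    return all(rec.get(f) for f in REQUIRED)

def consistency(rec):
    """Consistency: postcode conforms to the 5-digit Finnish format."""
    pc = rec.get("postcode") or ""
    return len(pc) == 5 and pc.isdigit()

complete = [r for r in records if completeness(r)]
consistent = [r for r in records if consistency(r)]
print(f"complete: {len(complete)}/{len(records)}")      # 1/3
print(f"consistent: {len(consistent)}/{len(records)}")  # 2/3
```

Checks of this kind produce the per-dimension defect counts that a subjective assessment can then be compared against, which is exactly the first two Pipino steps listed above.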
In other words, quality is itself an adaptive process, receptive to changes as a process matures and as alternatives emerge for comparison. Ultimately, how well a company's process incorporates quality is evaluated by the end result: cost savings, resources used, and the increased value the process brings to the company. Quality in a process is not what the investor puts in but what the end-user gets out, and what the customer is willing to pay for in the end result. "Quality means best for certain conditions: (a) the actual use and (b) the selling price" (Feigenbaum, 1983). If a company's processes ignore quality, the result is low customer satisfaction, which leads customers to reduce their investment or spending interest in the company; this in turn reduces income and diminishes margins. Since quality within a company's processes affects its financial value through both costs and income, it is the backbone of being a niche company in one's area of operation. "Quality management means that the organization's culture is defined by and supports the constant attainment of customer satisfaction through an integrated system of tools, techniques, and training" (Sashkin & Kiser, 1993). This definition emphasizes the need for the organization's culture to fully support quality at all times, making it an integral part of the company, the nut that glues the company's activities together. Most importantly, and especially with reference to this research, this entails continuous improvement of processes, functions, and systems. At a bare minimum, no shoddy work should be tolerated anywhere in the company, from top management to the bottom. It should be engraved in the company's culture and code of conduct that quality is part and parcel of its operations.
It is almost impossible to separate processes and functions from human factors. Management and employees alike derive satisfaction from good results. When quality has been properly integrated into a company's culture, results generate emotion in the people involved in the process. A result that brings smiles to management, employees, and, most importantly, the client is the mark of good quality: you'll know it, they'll know it, and the company will prosper from it. Employees take great satisfaction in discovering that not only management but also the customer is proud of their work.

2.4 Benefits of good quality
Now that we have a basic understanding of what quality is, why is it so important for a company to maintain high quality levels for both data and processes? Certain benefits of good data quality accrue to the company as well as to the end-user, the consumer. Good quality results from reducing process and data defects, through total quality management that promotes quality awareness and the participation of all members of the organization. It means quality at the source, which translates into reduced waste in the company's processes and thus into cost savings. Good quality data also makes problem solving easier. Through practices such as failure analysis and the measurement standards developed during quality analysis, defects and failures (even potential ones) can be identified easily, so problems are solved quickly and man-hours are saved; those hours can then be devoted to other tasks. For example, if a problem is encountered within a company's process, it can be solved easily thanks to the parameters in place that help identify the cause of the failure.
Good quality also gives direction for the continuous improvement of processes, and it aids in improving systems and employee efficiency: employees are continuously trained on the importance of embracing quality in their work and of always offering quality service to the customer (end-user), while systems, by virtue of being subjected to change and to demanding processes, reveal the key areas needing adjustment or improvement. Good data quality leads to quality results, which in turn translate into customer satisfaction, a key foundation block not only for maintaining profitability but also for increasing market share; a company with satisfied clients is always better placed to keep its competitive edge in its area of operation. Finally, by reducing data defects and improving systems and personnel efficiency, good quality leads to cost savings and improved profitability, the bottom line for every company. With reduced process costs, the company's revenue is expected to be bolstered, enabling it to invest more of its profits in increasing market share by researching and developing better ways of improving process and data quality.

2.5 Analyzing quality
So far we have looked at what quality is and why it matters to a company. The next question is how to determine the level of quality of a subject or object in a company. We saw earlier that quality can be analyzed through subjective or objective assessments. According to Neville Turbit, quality within company projects can be analyzed from either a business perspective or a technical perspective, criteria that depend on the type of project at hand.
Some scholars also discuss two additional ways to analyze quality in a project, the choice depending largely on the analyst and how much attention he wishes to give to each: one may analyze end-result quality or project process quality. In route optimization the two go hand in hand, and as we shall see later, technical factors have a significant effect on Itella's business. It is also best not to separate process quality from end-result quality, since the deliverables are proportionally linked to each other. Neville goes on to list some questions that may arise when analyzing quality within a project:

Was the project completed on time?
Was the project completed within budget?
Did the system meet my needs when it was delivered?
Does the system comply with corporate standards for such things as user interface, documentation, naming standards, etc.?
Is the technology and system stable and maintainable?
Is the system well engineered, so that it is robust and maintainable?

An analysis of data quality in route optimization had not been done before this research. It therefore called for careful thought about the methods I was going to use to analyze the various data, with the aim of producing viable results. The nature and format of the data to be analyzed were more or less standard: there was not much variation in data formats and fields, despite the fact that multiple information systems were in use. Forming associations and picking out discrepancies within the data was done through a process called data mining.

3 Research Strategy

3.1 Data mining
"Data mining involves the use of sophisticated data analysis tools to discover previously unknown, valid patterns and relationships in large data sets," which may be in quantitative, textual, or multimedia form (Jeffrey W. Seifert, Data Mining: An Overview).
Jiawei Han and Micheline Kamber give a more accessible, layman's definition of what data mining is: "Simply stated, it refers to extracting or mining knowledge from large amounts of data" (Data Mining: Concepts and Techniques, p. 5). Over the years, data miners have used a wide array of parameters to study data (ref: Data Mining: An Overview):

Association: patterns where one field in the data is connected to another field
Sequence or path analysis: patterns where one event leads to another event
Classification: identification of new patterns, such as relationships between different fields in the same data
Clustering: finding and visually documenting groups of previously unknown facts, such as geographic location and brand preferences
Forecasting: discovering patterns from which one can make reasonable predictions about future activities

However, in addition to getting results from data mining, it was vital to have an analysis tool that would give a clear picture of the quality standard of the analyzed data. To make the analysis of data quality in this research not only effective but also efficient, data mining has to go hand in hand with a tool that sets quality standards. Over the years, many companies have used total quality management tools, which have since been developed into a more rigorous analysis method known as Six Sigma. Companies such as Motorola and General Motors have proven track records of Six Sigma success, having saved billions of dollars over the years. Itella is striving to achieve efficiency and reduce process waste while maintaining profitability in a market facing aggressive competition from technological advancements, especially in the telecommunications industry. With this in mind, I deemed Six Sigma an appropriate quality tool for this project. So what, then, is Six Sigma?
3.2 Six Sigma
McGraw-Hill defines Six Sigma as a highly technical method, used by engineers and statisticians, to fine-tune products and processes in order to position a company for greater customer satisfaction, profitability, and competitiveness. From previous training in Six Sigma methods, I would say that Six Sigma is not a single entity but rather a collection of various process and quality analysis tools guided by the Six Sigma methodology; these include flowcharts, check sheets, Pareto diagrams, cause-and-effect diagrams, histograms, scatter diagrams, and control charts. Thomas Pyzdek, the author of The Six Sigma Handbook, states that for Six Sigma to make sense, quality has to be viewed from two perspectives: potential quality, the maximum result achievable from a process, and actual quality, the current result achieved from the same process. The gap between the two is what we term bad quality, failures, or defects. Essentially, the main goal of Six Sigma is to reduce variation within processes as much as possible. The table below shows sigma levels in relation to the number of faults at each level in a sample of one million instances or opportunities; the fewer the faults within a process, the more efficient the process is. Many companies have been misled into believing that, since the variation between the 4th and 6th sigma levels seems very small (within 1%), it is acceptable for their processes to remain at the 4th sigma; however, as we shall see later in this paper, the variation in costs between processes at different sigma levels is quite significant. Not included in the table are the intermediate sigma values, since sigma is calculated to the hundredth point at each level, which makes larger analyses more accurate.
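The relationship between fault counts and sigma levels shown in the table above can also be computed directly. The sketch below uses the common industry convention of a 1.5-sigma shift; the defect and unit counts are hypothetical sample numbers, not Itella figures.

```python
# Convert observed defect counts to DPMO (defects per million
# opportunities) and then to a sigma level under the conventional
# 1.5-sigma shift. Sample numbers are hypothetical.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Sigma level under the conventional 1.5-sigma shift."""
    yield_rate = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_rate) + 1.5

d = dpmo(defects=230, units=10_000, opportunities_per_unit=10)
print(f"DPMO = {d:.0f}, sigma level = {sigma_level(d):.2f}")
```

Feeding in 3.4 DPMO, the textbook Six Sigma figure, returns a level of about 6.0, while the hypothetical 2,300 DPMO above lands between the 4th and 5th sigma, the band where, as noted, correction costs are still substantial.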
Expenses incurred in correcting and eliminating errors in order to reach a certain sigma level are known as the costs of poor quality. Six Sigma should be well understood and adopted more or less as a company lifestyle for it to achieve its purpose. With reference to previous research, Pyzdek describes how the various sigma levels affect a company: companies without Six Sigma processes incur extremely high costs due to bad quality; companies operating at three or four sigma spend 25 to 40 percent of their revenues fixing problems, while those operating at Six Sigma spend less than 5 percent of their revenues fixing problems within processes. The cost of poor quality compared with Six Sigma is illustrated in FIGURE 4 below.

4 Understanding Route Optimization

4.1 Early Morning Delivery (EMD) Systems
In EMD, different information systems carry out various functions within the department. The main ones are JakTi and Lehtinet; earlier, when routes were measured manually, these two were the only information systems in use. Additional systems, Webmap and RouteSmart, were taken into use with the inception of the route optimization project. We shall take a brief look at these systems and their functions to understand the basics of this research.

4.1.1 JakTi
JakTi is an SQL database that holds workspaces with address and route information. There are two versions of JakTi, A and B. JakTi B is the download manager that transfers workspaces from the main database to a workstation (a desktop computer or laptop). JakTi A is the editor tool for the workspace already downloaded onto the workstation: in it, one can create new routes, delete existing ones, add and delete addresses or move them between routes, and add additional data to the routes.
The additional data added in JakTi A are mostly route parameters used in calculating route delivery times. These include apartment buildings' floor and elevator information, exceptional yard distances, and delivery mode. Exceptional yard distances are distances to delivery points located within private yards: the distance from the point where a deliverer parks their car to the point of actual delivery. Itella uses three main delivery modes in EMD: delivery by company car (right-hand-drive cars), delivery by private car (left-hand-drive cars), and delivery by bike.

4.1.2 Lehtinet
Lehtinet is an information system that imports address and route information from JakTi and matches it to newspaper subscriptions from newspaper publishers. The matching is not always 100% successful, however, and some errors occur. The most common errors in Lehtinet are described below.

4.1.2.1 Route number errors
These occur mostly in areas with new addresses or in areas that have had no EMD before. Since the matching data is imported from JakTi, missing data in JakTi means there will be no match in Lehtinet. If the publishers have correct route information, however, the subscription will still be allocated to the correct route.

4.1.2.2 Address errors
Newspaper publishers may have different address databases than Itella, which causes conflicts when matching addresses. These errors are mostly misplaced characters or misspelt addresses. As in the previous case, subscriptions will be allocated to the correct routes, but the matching information will be wrong.

4.1.3 Webmap
Wikipedia defines a web map service as a standard protocol for serving georeferenced map images over the Internet, generated by a map server using data from a GIS database, and defines a GIS as a system that captures, stores, analyzes, manages, and presents data with reference to geographic location.
Simply put, Webmap is a tool used to edit visualized address data through a process called geocoding. In our case, Webmap imports address data from JakTi and presents it as visual data on a map interface, shown by the blue dots in FIGURE 5. A user places the visualized addresses, guided by features on the map interface; this process is known as geocoding. Webmap uses existing workspaces from the JakTi database. Once a point on Webmap has been geocoded, it receives coordinates under the KKJ coordinate system used in Finland. After all the addresses have been geocoded, the workspace is returned to the main database, where the coordinate information is stored in JakTi.

4.1.4 RouteSmart
RouteSmart is a tool that brings together data from JakTi, Webmap, and Lehtinet, and uses variable parameters and functions to calculate routes as defined by a user. One more set of crucial data needed to calculate routes has not yet been mentioned: the distances between delivery points, represented by a detailed set of street networks. RouteSmart is also the tool used to visualize and edit street segments and networks. Webmap, by contrast, is just a map interface with non-editable layers that outline street networks to guide a user in geocoding; these are only the main streets, as seen on any map interface, along with vector lines intended to give a more accurate geographical position of the streets. Because of the parameters agreed upon within Itella and the variation in delivery modes, the street networks actually used are much more detailed, and are classified as listed below:

CAR: visualized as red lines; streets usable by cars
WALK-CAR (WC): visualized as blue lines and connected to car street segments; not usable by cars, so delivery continues on foot from the point where the car segment ends
WALK-WALK (WW): visualized as green lines
STAIRS: visualized as orange lines

4.1.5 Integration of EMD systems

The relationship between Jakti, Lehtinet, Webmap and RouteSmart is illustrated in the picture below. Information relayed between these systems is labelled with letters, which are listed below. The systems discussed above hold what would be the foundation data for eventual route planning. In summary, the data mentioned above is:

Address and route information
Apartment building information
Elevator information
Yard exception distances
Newspaper subscriptions
Co-ordinate information
Street networks

All of the above data except for street networks is directly linked to delivery points. An address is attached to what we call a delivery point, a term also used to refer to visualized addresses on either Webmap or RouteSmart. Addresses in the same delivery location, for example an apartment building with several apartments, are put under the same group. In reference to delivery points, I shall use two terms to differentiate between single addresses and grouped addresses: a service location is an individual delivery point in a group, while a service point is the whole group. However, data in these systems is not mutually compatible, and for RouteSmart to be able to handle it, it needs to be put together in a uniform format understandable by the system. This is done by means of an ETL download.

4.2 ETL download

ETL stands for extract, transform and load. By running an ETL download, the relevant data discussed earlier is extracted from Jakti, Lehtinet and Webmap, transformed into a format recognizable by RouteSmart, and then loaded into RouteSmart for manipulation. Information in an ETL file relates mainly to delivery points. There are several column fields in an ETL download which represent various parameters drawn from the relevant systems.
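As a rough illustration of the extract, transform and load flow just described, the sketch below merges sample delivery-point records from the three source systems into uniform rows. All field names and sample values here are assumptions for illustration only, not the real Jakti, Lehtinet or Webmap schemas.

```python
# Illustrative ETL sketch: merge delivery-point records from three sources
# into one uniform row format. All field names and values are assumed.

def extract():
    # In reality these records would be pulled from the Jakti, Lehtinet
    # and Webmap databases; here they are hard-coded samples.
    jakti = [{"jakti_id": 101, "address": "Mannerheimintie 1", "floors": 4}]
    lehtinet = [{"jakti_id": 101, "subscriptions": 3}]
    webmap = [{"jakti_id": 101, "x": 3386000, "y": 6672000}]  # KKJ co-ordinates
    return jakti, lehtinet, webmap

def transform(jakti, lehtinet, webmap):
    # Join the three sources on the shared Jakti id into uniform rows.
    subs = {r["jakti_id"]: r for r in lehtinet}
    coords = {r["jakti_id"]: r for r in webmap}
    rows = []
    for r in jakti:
        jid = r["jakti_id"]
        rows.append({
            "jakti_id": jid,
            "address": r["address"],
            "floors": r["floors"],
            "subscriptions": subs.get(jid, {}).get("subscriptions", 0),
            "x": coords.get(jid, {}).get("x"),
            "y": coords.get(jid, {}).get("y"),
        })
    return rows

def load(rows):
    # The target system would read these rows; here we simply return them.
    return rows

etl_rows = load(transform(*extract()))
print(etl_rows[0]["subscriptions"])  # 3
```

The essential point is only the shape of the pipeline: three incompatible sources joined on a shared key into one uniform row format.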
4.2.1 ETL download fields

Part of the analysis done in this research was derived from analysis of the ETL file. It is therefore important that the major fields, or columns, relevant to the analysis be explained in a little more detail.

4.2.1.1 Jakti ID

This column, column A below, holds a number that identifies a delivery point in Jakti. This number should be unique to every delivery point. It is automatically generated and follows the common upper-bit/lower-bit binary numbering. In the early days when EMD started using Jakti, the system was already being used by daily mail delivery, which uses the same database of addresses. To avoid large amounts of work, many of the addresses were copied from daily mail to EMD workspaces. This meant that many of the delivery points shared the same attributes. It is therefore possible that one address, although in different workspaces, could be sharing the same Jakti ID.

4.2.1.2 Jakti object ID and Jakti element ID

A Jakti object is a subgroup in Jakti, while an element is a larger group that holds the smaller subgroups. The IDs in this case refer to the unique numbers each object and element has. A subgroup, for example, would be an apartment building with many delivery points; these delivery points are grouped together under one Jakti object.

4.2.1.3 Jakti internal sequence

The simplest way to define the Jakti internal sequence is by example. Every delivery object begins with Jakti internal sequence number 1, and the sequence runs sequentially until the next object. In other words, this is a numbering system for delivery points in the same group.

4.2.1.4 Extra distance

There are certain parameters used in Itella to calculate various elements in route planning. Extra distance is one of the results of those parameters: any additional distance within an apartment building covered by walking during delivery.
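To make the idea of such a parameter concrete, here is a minimal sketch of how an extra-distance figure for an apartment building might be computed. The per-floor distance and the elevator rule below are invented placeholders, not Itella's actual parameters.

```python
# Hypothetical sketch of an extra-distance parameter for an apartment
# building. METRES_PER_FLOOR and the elevator rule are invented
# placeholder values, not the parameters actually used by Itella.

METRES_PER_FLOOR = 8.0  # assumed walking distance per floor climbed

def extra_distance(floors, has_elevator):
    # Assumed rule: with an elevator only one floor's worth of walking
    # is counted; without one, every floor above the ground floor is walked.
    walked_floors = 1 if has_elevator else max(floors - 1, 0)
    return walked_floors * METRES_PER_FLOOR

print(extra_distance(4, has_elevator=False))  # 24.0
print(extra_distance(6, has_elevator=True))   # 8.0
```

Whatever the real rule is, the point is that floor and elevator data feed directly into a walking-distance figure, which is why missing values for them distort route calculations.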
4.2.1.5 Route number and route sequence

A route number is a number unique to each planned area of delivery. The route sequence in an ETL file is a sequential numbering of the delivery points according to their delivery order.

4.2.1.6 X and Y co-ordinates

X and Y co-ordinate information comes from Webmap. Once a delivery point has been created in Jakti and transferred to Webmap, the point is geocoded, after which it acquires the co-ordinates of the location it has been placed on in the Webmap map interface. This information is stored in Jakti's main database, and during and after an ETL download it is shown in the columns below. The co-ordinate system used here is KKJ, which has been used across Finland since the late 1960s. KKJ is a two-dimensional co-ordinate system, so the height of different areas is not depicted in an ETL file.

4.2.1.7 Floor and elevator

The information in these two columns relates to apartment buildings, which normally have more than one storey. In the column marked floors, the number of floors in a multi-storey apartment building is marked numerically. Only the first delivery point in a group of addresses in an apartment building is marked with the number of floors in that building. The elevator column is marked only with 1s and 0s: 1 represents a building with an elevator, while 0 represents a building without one.

5. Analysis elements

At the beginning of the research, it was agreed that I would focus on four main areas that had already undergone at least one optimization phase. These areas, or regions, were Jyväskylä, Vaasa, Nurmijärvi and Salo. The fact that these regions also had a diverse scope of delivery types and data also supported the decision to focus on them. Having undergone one optimization process already meant it would be easier to compare variations between the data initially used and the current data.
Between these phases of optimization there had been large resource investments to correct quality issues as well as to prepare data for the next phase of optimization. Through the analysis we would therefore be able to state conclusively whether the resources used had returned value through improved data quality. The amount of data to be handled in these four regions was quite large. I therefore decided to use a standard sample from all regions that would reflect a near-realistic picture of the situation, especially for analysis procedures done using RouteSmart. I used urban boundaries that were already prebuilt within the system; these are the same urban boundaries used by geographers and statisticians in Finland. Statistics Finland defines an urban settlement as an area with residential buildings separated by at most 200 meters and with at least 200 inhabitants; anything falling outside this is considered a rural area. In selecting data in urban areas, I increased the boundary by 10 meters to accommodate service locations whose buildings might be physically within the boundary but whose delivery point is a few meters outside it. All in all, there were three data and process analysis elements that this research dwelt on. In addition, there was a financial analysis done from the derived results, which was more or less the icing on the cake. The elements are:

1. Delivery point / service location data in relation to street networks:
   - proximity of service locations to streets in relation to prescribed guidelines
   - service locations connected to the wrong streets
2. Additional information
3. Geocoded (Webmap) data vs. Jakti data

5.1 Service locations vs. street networks

There are set guidelines for handling optimization data. This was a good starting point for analyzing quality, since it was possible to document variations in the data compared to the prescribed guidelines.
Route optimization guidelines state that in urban areas, all service locations must be within 10 meters of connectable street networks for optimum results during route calculation. As we saw earlier, there are four categories of street networks; however, service locations can only be connected to two of these: CAR and WALK-CAR. For all four regions, the proximity of service points (delivery groups) to the streets was calculated at intervals of 5 meters, ranging from 10 meters to 60 meters. This analysis was done in RouteSmart using selection tools. Considering that all the regions had undergone at least one optimization, the analysis was done both for the data used in the first optimization and for the current data. If street networks are properly digitized, the ideal result should show more service points falling within the required proximity. Below is a sample of the table used to collect the results. The idea was to get the percentage of service points, relative to the total number in that region's urban area, falling within the different proximities. The results were then translated onto a graph which also showed the different sigma levels.

5.1.1 Street network analysis results

All regions generally fell under the 3rd sigma level when considering the required proximity of 10 meters. There were, however, significant improvements in Vaasa, since more service points in the new data fell within the requirements compared to the old data. There was not much change in Salo, while there was a slight decrease in the level of analyzed quality in Jyväskylä. Unfortunately, Nurmijärvi showed the greatest negative variation between old and new data. This, however, should not overshadow the fact that out of all four regions, Nurmijärvi had the highest sigma level.
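The banded proximity tally used in this analysis can be reproduced with a short script. The distances below are made-up sample values; in the real analysis they came from RouteSmart's selection tools.

```python
# Sketch of the proximity-band tally: given each service point's distance
# (in metres) to its nearest connectable street segment, count how many
# fall within each 5 m band from 10 m up to 60 m. Distances are made up.

def band_counts(distances, start=10, stop=60, step=5):
    bands = {}
    for upper in range(start, stop + step, step):
        bands[upper] = sum(1 for d in distances if d <= upper)
    return bands

distances = [3.2, 7.9, 11.4, 18.0, 26.5, 41.0, 72.3]
counts = band_counts(distances)
total = len(distances)
within_10m_pct = 100 * counts[10] / total
print(counts[10], round(within_10m_pct, 1))  # 2 points within 10 m -> 28.6 %
```

The percentages per band are what get plotted against the sigma levels on the graph.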
It is important to note that the R-squared values from this analysis were relatively high for the sampled data, so the results can be used effectively to predict similar terms in other analyses.

5.2 Additional information

Additional information in this case refers only to elevator and floor information. The analysis for this element was quite straightforward: the task was to compare data from the regions to data already entered into Jakti. Having all additional information correct is important because of the effect that missing data could have on the results of the optimization process. There are separate parameters used to calculate distances and travel times within apartment buildings, and these depend on the availability of floor and elevator information. Comparing the data was the first step. If any discrepancies were found, I tabled them as follows. The floor and elevator information in different regions varies a lot, so reporting it as, for example, "In Jyväskylä, 5 buildings with 4 floors without elevators" was not a very viable comparison across regions. I needed to standardize the discrepancies using the parameters used to calculate apartment building distances and travel times. This allowed me to represent the distance discrepancies as percentages of the total expected distances in each region. The table below shows the calculated total distances that should be in the information systems; these totals are calculated from the data that comes from the region of Vaasa. After this, we calculate the total distance represented by the discrepancies found between the ETL file and the regional data, as shown below.

5.2.1 Additional information analysis results

The above steps were carried out for all the regions, and the compiled results from the analysis are shown below. Results showed that Vaasa had the least missing data in its databases, followed by Salo and Nurmijärvi.
From the analysis, Nurmijärvi had the worst result. Jyväskylä is not in the table above because no discrepancies were found in its data.

5.3 Webmap vs. Jakti analysis

As discussed earlier in this paper, one of the main methods I used to develop valid analysis results was data mining, through which I found new relationships and associations between the various fields in an ETL file. There is a direct relationship between Jakti data and geocoded data. Ideally, the Jakti internal sequence should start at 1 for every identical set of co-ordinates and continue sequentially until the next set of identical co-ordinates. However, there were some discrepancies in the Jakti internal sequence column where the sequence broke off at some point. An example would be a group with a sequence running from 1 to 10 but with an odd number somewhere in between that does not fall within the sequence. This instance is illustrated in the highlighted rows in the diagram below. There are basically two reasons for this occurrence:

Error in Webmap, where a service group has split into two different groups
Error in Jakti, where service locations in different service groups have been joined in Webmap

Whichever the case, it was not easy to pick these out directly from the ETL file, but the discrepancies were collectively taken under this analysis as an element representing one dimension of quality. Picking out these errors required Excel macros to be written to run the task. Three separate macros were made.
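The core check these macros implement, spotting a break in the internal sequence within a group of identical co-ordinates, can be mimicked in a few lines. The row layout here is an assumed simplification of the real ETL columns.

```python
# Sketch of the sequence-break check the Excel macros perform: within each
# group of identical co-ordinates, the Jakti internal sequence should run
# 1, 2, 3, ... without gaps. Rows are (x, y, internal_sequence) tuples.

from itertools import groupby

def find_sequence_breaks(rows):
    breaks = []
    # Group consecutive rows by their co-ordinate pair.
    for coords, group in groupby(rows, key=lambda r: (r[0], r[1])):
        seqs = [r[2] for r in group]
        expected = list(range(1, len(seqs) + 1))
        if seqs != expected:
            breaks.append((coords, seqs))
    return breaks

rows = [
    (100, 200, 1), (100, 200, 2), (100, 200, 7),  # 7 breaks the sequence
    (300, 400, 1), (300, 400, 2),                 # intact group
]
print(find_sequence_breaks(rows))  # [((100, 200), [1, 2, 7])]
```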
The functions of these macros were to do the following, in this order:

1. Highlight the discrepancies
2. Copy the discrepancies onto a separate worksheet
3. Calculate the discrepancies as percentages of two totals:
   (a) as percentages of total service points or groups
   (b) as percentages of total service locations

The difference between 3(a) and 3(b) is that in case (a), the groups containing erroneous service locations were calculated as percentages of the total number of groups in a region, while in case (b), the single erroneous service locations were individually calculated in relation to the total number of service locations in the region.

5.3.1 Webmap vs. Jakti analysis results

The tables above show results for both service groups and service locations. The reason why Jyväskylä and Vaasa had bigger percentages for service groups compared to Nurmijärvi and Salo is the difference in the types of groups in those regions: in essence, some groups have more service locations than others. The above data was then put on a graph and compared to six sigma levels. This analysis mainly aimed at showing the level of quality between data that should be in sync in both Jakti and Webmap. Jyväskylä, Nurmijärvi and Salo had their data falling between the 3rd and 4th sigma levels, while Vaasa's quality was well below the 3rd sigma level.

6. Cost of quality overview

The analysis methods discussed above gave a good picture of different estimated quality errors, mainly in street and delivery point data. The question that remained was this: regardless of the quality issues now, have the resources invested, and those being invested now, been worthwhile in improving the process? In addition, what is the relation between these perceived costs of quality and the costs at various six sigma levels? Efficiency in a process and high-quality data mean more financial savings within the company.
Therefore, in my opinion, after all is said and done, the ultimate measurement for determining the level of quality would be the cost of quality. First and foremost, I needed to gather estimates of the resources used in data checks between the first optimization phase and the current situation. This data was available only for work done on delivery points. Using unit costs for correcting a single element, that is, a delivery point or street segment, I calculated the current costs that would be incurred if all the errors were to be eliminated. My calculation was as follows: ((number of elements of wrong quality * time in minutes to correct one element) / 60) man-hours * cost of one man-hour. For the street data, I gathered the total number of connectable street segments in the urban area of each region. Due to the high R-squared value obtained from the street network analysis, it was relatively reliable to make proportional comparisons from that analysis directly to streets. This means I used the percentage obtained from the earlier analysis for service locations within 10 meters' proximity to estimate approximately how many street segments would have errors in one workspace. I selected only CAR and WALK-CAR streets, the connectable street categories, in urban areas, with an allowance of 10 meters from the boundary of the urban area. I went further and calculated the costs of having a particular number of errors at different sigma levels. This analysis gave an insight into the resources, in terms of money, already used, the perceived expenses, and projections of savings from moving quality to higher sigma levels.

6.1 Cost of quality: streets

In FIGURE 33, the column 10m shows, as per the analysis above, the potential costs and work hours that would be invested in checking the streets beyond the 10-meter proximity in each region.
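The calculation formula quoted above translates directly into a small function. The sample figures below are illustrative only, not Itella's actual unit costs.

```python
# Direct translation of the cost formula used in this section:
# ((errors * minutes_per_fix) / 60) man-hours * hourly_cost.
# The sample numbers are illustrative, not the actual figures.

def cost_of_quality(error_count, minutes_per_fix, hourly_cost):
    man_hours = (error_count * minutes_per_fix) / 60
    return man_hours * hourly_cost

# e.g. 1200 faulty delivery points, 2 minutes per fix, 30 EUR per man-hour
print(cost_of_quality(1200, 2, 30))  # 1200.0 (EUR)
```

Running the same function with the error counts implied by each sigma level gives the per-level cost columns shown in the FIGUREs that follow.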
The first column shows how large the initial budget would be to check all the streets in the regions. The subsequent columns in the FIGURE show how much the costs would be if perceived quality were at the different sigma levels. FIGURE 34 shows the perceived savings, as a percentage of the total budget, that would be made in each region if street quality were at the different sigma levels shown. This amount is the difference between the current perceivable costs of quality and those at the next sigma level. From the two FIGUREs above, we can conclude that the perceived costs of quality are roughly proportional to the size of the region. In Salo's case, however, perceived costs are substantially large considering the size of the region. More importantly, the importance of keeping street quality high is shown by the savings that would be made from a shift to just the next sigma level.

6.2 Cost of quality: service locations

In this analysis, approximations of the actual resources used in service location checking for the period between the last optimization phase and now were available. In FIGURE 35, the column current quality issues shows the investments required to correct current quality issues in the regions. Just as in the previous cost of quality analysis, the subsequent columns in the FIGURE show perceived costs at different sigma levels. The final column shows how many resources have currently been invested. The financial perspective of the service location analysis gave some quite interesting results. Vaasa, the region with the highest amount of invested resources, needed the highest investment in additional resources to check errors in service locations. Jyväskylä, also a relatively large region, likewise required quite a substantial investment in error checking, in comparison to the other two areas, Salo and Nurmijärvi.
The current costs to recheck the service locations in these three regions are almost the same. However, considering the sizes of the regions, these costs are quite high in Salo and Nurmijärvi. It is encouraging to notice that service location quality is at a higher level than street data quality. This, however, does not take away from the fact that additional savings could be made by shifting quality from one sigma level to another. These perceived savings are shown in FIGURE 36.

7. Summary

The current level of quality for data in route optimization cannot be summarized collectively. The two subjects analyzed in this paper, service locations and street networks, are at different levels of quality; service locations are at a much better level than street networks. When it comes to resources invested, the analyses show that although there are slight improvements in quality, they do not necessarily correspond to the investment made in all regions. Through review of various documents and interviews with various persons, I found several possible reasons why results have not necessarily been positive. These include:

The continuously repetitive routines of data checks breed disinterest in the checking process. This might result in reduced concentration and some degree of negligence on the part of the worker while handling the task.
Current data check methods are time-consuming and error-prone because they are manual at many steps. Much of the data in route optimization is numerical, which makes it possible to automate at least part of the checking using associations between these numerical fields.
A lack of defined standards for following up these checks makes it hard not only to track progress but also to define acceptable levels. Setting acceptable levels of quality reduces the chances of wasted investment through unnecessary processes.
Inconsistency in information flow regarding adjustments to how different data checking processes are handled within the project.

There have, however, been significant adjustments made during the course of this research that aim at improving the quality of data. These are mostly system-related improvements.

8. Limitations of the research

I believe that this research gives a good insight into quality levels and standards within the route optimization project. However, the analysis methods and elements used are just small aspects of the data as a whole and therefore might leave a few gaps and minor inconsistencies. There are many other aspects of route optimization data that could, and should, be looked into in the process of making conclusive quality statements for the project. In addition, in some of the analyses, actual data would help develop this research further in the future. On the other hand, this work provides solid groundwork and a direction that Itella should take to ensure better efficiency in the project's processes.

9. Conclusion

In my opinion, the easiest way to move forward from the results derived in this research would be to create a quality plan. David Loshin states that high-quality data requires constant vigilance: by introducing data quality monitoring and reporting policies and protocols, the decisions to acquire and integrate data quality technology become much simpler. Assessment and improvement of data quality should be adopted as a continuous process with both short- and long-term goals. Analyses of quality should be done from both subjective and objective perspectives to ensure optimum results. Considering the current level of quality for data in route optimization, the current short-term goal would be to reach the next sigma level. Six Sigma uses a very simple five-step method to achieve continuous improvement, abbreviated as DMAIC.
DMAIC stands for Define, Measure, Analyze, Improve and Control. This is a process that, by virtue of its proven track record, I believe would be highly beneficial to Itella. Basically, it means first defining the problems and analysis subjects within the project: identifying the variables and deliverables and deciding how to analyze their quality. Secondly, continuously measure the performance of the project; this step includes ensuring access to the correct data when needed. The third step is to analyze the data using predefined analysis methods. After this, take steps to improve the problematic areas, and finally follow up on the progress of the process. High costs of quality not only reduce profitability, they are also passed directly on to the customer. Company processes should not in any way affect their clients; in route optimization, poor data quality has sometimes caused long delays in project implementation and resulted in some unhappy clients. It is also vital that all employees understand the company's direction on quality, as this plays a big part in understanding why the customer is the key focus for the company. With the adoption of Six Sigma methods and further research into analyzing the quality of route optimization data, I believe that the blissful realm of high quality will eventually be realized. It takes work and effort to achieve this, but as a wise man once said, only by getting stung does one get to the honey.

Tuesday, May 19, 2020

Sociology - Crime and Deviance Essay

Crime and Deviance

Crime is behaviour that breaks the rules and statutes regulating a society; it is behaviour or action that puts members of the public at risk of harm in one way or another, be it a robbery or a violent attack. Deviance, however, is not necessarily breaking the law but is a violation of social norms. (Cliff Notes, 2009) What is classed as criminal or deviant depends on certain factors. Crime, or what is perceived as criminal, changes over time; what is considered a criminal act now may not have been seen as such in previous years. For example, recreational drugs such as cocaine were not illegal in the late nineteenth century but carry a hefty punishment for possession now. What is deemed to be… Hans Brunner's study backs up the biological theory; he conducted a case study of the male members of a family in Holland, each with different violent tendencies. One of them attempted to run his boss over with his car after an argument, another raped his sister, and one forced his sister to undress in front of him. They all displayed retarded motor development, difficulties in task planning and awkward sexual behaviours. With further study, Brunner found that all of them had a lack of serotonin, which he linked to the violent outbursts they had displayed. (Brunner, H., 1993) The main issue with this study is the sample: not only is it limited, but it is culture-biased and gender-biased as well, since the focus of the study was three Dutch males. This means the findings cannot be generalised, as they are specific to the sample and the sample is not large enough to generalise more widely. The biological theory of crime and deviance is a reliable theory in that it can be measured accurately, since its interest is in chemical imbalances in the brain rather than what we cannot see or measure, as with the sociological theory.
Another positive point for the theory is that it explains the violent outbursts, and the imbalance in the brain can be treated with drugs that help adjust serotonin levels back to a normal level.