Thursday, November 28, 2019

Winston Churchill's Quote Analysis Essays - Democracy

Winston Churchill's Quote Analysis Winston Churchill's Quote on Democracy "Many forms of government have been tried and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all wise. Indeed, it has been said that democracy is the worst form of government, except for all the others that have been tried from time to time." - Winston Churchill, 1947. This quote represents the views of Winston Churchill, the former Prime Minister of Great Britain, not only on democracy, but on government as a whole. By this quote Churchill meant that democracy is not perfect, and no government created so far is. Every form of government, no matter how successful it is, has flaws. According to Webster's Dictionary, the definition of democracy is: 1 : government by the people; esp : rule of the majority 2 : a government in which the supreme power is held by the people. Democracy cannot function without the people, especially if the people are ignorant, ill-informed, or only care about their own interests. Building an effective democracy takes time; the people must be educated to make effective and well-informed decisions. This is one reason why democracy is failing in Russia, and one of democracy's flaws. Democracy is also a very slow process; the checks and balances that help make democracy effective also make it inefficient. Before a law is passed it spends a great deal of time in the hands of officials in Congress, and even in the hands of the president. "Democracy cannot exist as a permanent form of government. It can only exist until a majority of voters discover that they can vote themselves largesse out of the public treasury." - Alexander Tyler. While this quote is not completely realistic, the idea behind it is very true. It is often said that democracy is just a tyranny of the majority. This can seem true at times, because the thoughts and ideas of a minority are not always fairly represented. In fact, in a few ways a dictatorship has advantages over democracy: it is more efficient. A dictatorship is a very efficient form of government, and so is an oligarchy, but in those governments it can be very difficult for the views and opinions of the people to be heard. For these reasons democracy is the most just system of government thus far. Democracy works because even though the leaders of a nation are not always the wisest people, they know that they can lose their power just as quickly as they gained it if the people do not approve of what they're doing. A democratic society is not in any way utopian and no one pretends it is, but no society yet has been perfect, and that is the thought behind Winston Churchill's quote. Basically, Churchill means that while democracy isn't perfect, nothing is, and so far it's the best form of government we have.

Sunday, November 24, 2019

World War II Fighter Grumman F6F Hellcat

World War II Fighter Grumman F6F Hellcat Having begun production of their successful F4F Wildcat fighter, Grumman began work on a successor aircraft in the months before the Japanese attack on Pearl Harbor. In creating the new fighter, Leroy Grumman and his chief engineers, Leon Swirbul and Bill Schwendler, sought to improve upon their previous creation by designing an aircraft which was more powerful and offered better performance. The result was a preliminary design for an entirely new aircraft rather than an enlarged F4F. Interested in a follow-on aircraft to the F4F, the US Navy signed a contract for a prototype on June 30, 1941. With the US entry into World War II in December 1941, Grumman began utilizing data from the F4F's early combat against the Japanese. By assessing the Wildcat's performance against the Mitsubishi A6M Zero, Grumman was able to design its new aircraft to better counter the nimble enemy fighter. To aid in this process, the company also consulted noted combat veterans such as Lieutenant Commander Butch O'Hare, who provided insight based on his firsthand experiences in the Pacific. The initial prototype, designated XF6F-1, was intended to be powered by the Wright R-2600 Cyclone (1,700 hp); however, information from testing and from the Pacific led to its being fitted with the more powerful 2,000 hp Pratt & Whitney R-2800 Double Wasp turning a three-bladed Hamilton Standard propeller. A Cyclone-powered F6F first flew on June 26, 1942, while the first Double Wasp-equipped aircraft (XF6F-3) followed on July 30. In early trials, the latter showed a 25% improvement in performance. Though somewhat similar in appearance to the F4F, the new F6F Hellcat was much larger, with a low-mounted wing and a higher cockpit to improve visibility. Armed with six .50 cal. M2 Browning machine guns, the aircraft was intended to be highly durable and possessed a wealth of armor to protect the pilot and vital parts of the engine, as well as self-sealing fuel tanks. Other changes from the F4F included powered, retractable landing gear with a wide stance to improve the aircraft's landing characteristics.

Production and Variants

Moving into production with the F6F-3 in late 1942, Grumman quickly showed that the new fighter was easy to build. Employing around 20,000 workers, Grumman's plants began to produce Hellcats at a rapid rate. When Hellcat production ended in November 1945, a total of 12,275 F6Fs had been built. During the course of production, a new variant, the F6F-5, was developed, with production commencing in April 1944. This possessed a more powerful R-2800-10W engine, a more streamlined cowling, and numerous other upgrades including a flat armored-glass front panel, spring-loaded control tabs, and a reinforced tail section. The aircraft was also modified for use as the F6F-3/5N night fighter. This variant carried the AN/APS-4 radar in a fairing built into the starboard wing. Pioneering naval night fighting, F6F-3Ns claimed their first victories in November 1943. With the arrival of the F6F-5 in 1944, a night fighter variant was developed from the type. Employing the same AN/APS-4 radar system as the F6F-3N, the F6F-5N also saw some changes to the aircraft's armament, with some replacing the inboard .50 cal. machine guns with a pair of 20 mm cannon.
In addition to the night fighter variants, some F6F-5s were fitted with camera equipment to serve as reconnaissance aircraft (F6F-5P).

Handling Versus the Zero

Largely intended to defeat the A6M Zero, the F6F Hellcat proved faster at all altitudes, with a slightly better climb rate above 14,000 ft, and was a superior diver. Though the American aircraft could roll faster at high speeds, the Zero could out-turn the Hellcat at lower speeds and could climb faster at lower altitudes. In combating the Zero, American pilots were advised to avoid dogfights and to utilize their superior power and high-speed performance. As with the earlier F4F, the Hellcat proved capable of sustaining a great deal more damage than its Japanese counterpart.

Operational History

Reaching operational readiness in February 1943, the first F6F-3s were assigned to VF-9 aboard USS Essex (CV-9). The F6F first saw combat on August 31, 1943, during an attack on Marcus Island. It scored its first kill the next day when Lieutenant (jg) Dick Loesch and Ensign A.W. Nyquist from USS Independence (CVL-22) downed a Kawanishi H8K Emily flying boat. On October 5-6, the F6F saw its first major combat during a raid on Wake Island. In the engagement, the Hellcat quickly proved superior to the Zero. Similar results were produced in November during attacks against Rabaul and in support of the invasion of Tarawa. In the latter fight, the type claimed 30 Zeros downed for the loss of one Hellcat. From late 1943 forward, the F6F saw action during every major campaign of the Pacific war. Quickly becoming the backbone of the US Navy's fighter force, the F6F achieved one of its best days during the Battle of the Philippine Sea on June 19, 1944. Dubbed the Great Marianas Turkey Shoot, the battle saw US Navy fighters down massive numbers of Japanese aircraft while sustaining minimal losses. In the final months of the war, the Kawanishi N1K George proved a more formidable opponent for the F6F, but it was not produced in significant enough numbers to mount a meaningful challenge to the Hellcat's dominance. During the course of World War II, 305 Hellcat pilots became aces, including the US Navy's top scorer, Captain David McCampbell (34 kills). Downing seven enemy aircraft on June 19, he added nine more on October 24. For these feats, he was awarded the Medal of Honor. During its service in World War II, the F6F Hellcat became the most successful naval fighter of all time, with a total of 5,271 kills. Of these, 5,163 were scored by US Navy and US Marine Corps pilots against a loss of 270 Hellcats. This resulted in a remarkable kill ratio of 19:1. Designed as a Zero killer, the F6F maintained a kill ratio of 13:1 against the Japanese fighter. Together with the distinctive Chance Vought F4U Corsair, the Hellcat formed a lethal duo. With the end of the war, the Hellcat was phased out of service as the new F8F Bearcat began to arrive.

Other Operators

During the war, the Royal Navy received a number of Hellcats through Lend-Lease. Initially known as the Gannet Mark I, the type saw action with Fleet Air Arm squadrons in Norway, the Mediterranean, and the Pacific. During the conflict, British Hellcats downed 52 enemy aircraft. In combat over Europe, the type was found to be on par with the German Messerschmitt Bf 109 and Focke-Wulf Fw 190. In the postwar years, the F6F remained in a number of second-line duties with the US Navy and was also flown by the French and Uruguayan navies. The latter used the aircraft up until the early 1960s.
F6F-5 Hellcat Specifications

General
Length: 33 ft. 7 in.
Wingspan: 42 ft. 10 in.
Height: 13 ft. 1 in.
Wing Area: 334 sq. ft.
Empty Weight: 9,238 lbs.
Loaded Weight: 12,598 lbs.
Maximum Takeoff Weight: 15,514 lbs.
Crew: 1

Performance
Maximum Speed: 380 mph
Combat Radius: 945 miles
Rate of Climb: 3,500 ft./min.
Service Ceiling: 37,300 ft.
Power Plant: 1 × Pratt & Whitney R-2800-10W Double Wasp engine with a two-speed two-stage supercharger, 2,000 hp

Armament
6 × 0.50 cal. M2 Browning machine guns
6 × 5 in. (127 mm) HVARs or 2 × 11¾ in. Tiny Tim unguided rockets
Up to 2,000 lbs. of bombs

Sources
World War II Database: F6F Hellcat
Ace Pilots: F6F Hellcat
Military Factory: F6F Hellcat

Thursday, November 21, 2019

Business Case Study Review Essay Example | Topics and Well Written Essays - 1000 words

Business Case Study Review - Essay Example -Hello- I am not asking for anything different than what is in the attached instructions. All steps 1-5 were included in the request, where an action plan is step 5. Please complete this; 2 days is fine and I will fix the rest. The initial instruction: To read the case study and apply all of the steps as indicated on my attachment. You can use bullets but make sure all steps are answered for the McDonald's "Seniors" Case Study. I did answer all the steps 1-4 in bullet form, for that is what you requested. Step 5 cannot be answered in bullet form, so I assumed it was not part of the task. Furthermore, your order consists of 4 pages only at 250 words a page. That would amount to a mere 1000 words. With these bullet answers, we have already covered 1498 words. We have exceeded the word limit by 498 words already. You are not obliged to do any extra work for free. If you have completed the paper up to the initial instructions, please upload the same file to the order page and advise the Customer to place a new order. Or let us know for how many extra pages we should charge him, if the additional work pertains to this paper. 5. No documents have been forwarded. All information is from observations of the Manager. Data on the population of the town (number and ages) can be obtained from the national statistics office of the town; data on the average number of elderly that stay up to 3 pm or mid-afternoon can be recorded by the door guard. All information is from observations of the Manager, hence it is a first-hand account and very reliable. The source definitely has an interest in the case since she is the manager and it is her responsibility. 8. The problem is urgent because the number of elderly staying for prolonged periods is increasing. The impact of other possible effects relative to this cannot be estimated at this time, but immediate action is better than a singed reputation. 9. The stakes are high. If the manager is unable to find a

Wednesday, November 20, 2019

Personal reflection Coursework Example | Topics and Well Written Essays - 750 words

Personal reflection - Coursework Example Through the lesson, I also learnt how to quantify risk and profitability. I finished the lecture by learning how to mitigate risks, especially the risks associated with projects. In my week five task, I led my group through the group project we are undertaking. My key role at this stage was to allocate tasks. I assigned Nicole, Emma and Cindy each to design five posters. I allocated to Cindy the preparation of a time schedule and the writing of notes about the meeting we had. I assigned to Emma the task of taking photos of the park. I gave Nicole the task of correcting mistakes in the posters and Amber the collection of documents and the writing of the final report. Attending the lecture on risks and projects was a new and exciting experience. Since I had never attended a lecture on the topic, I felt a bit nervous and anxious, as I did not have defined expectations and was not sure if I would understand the lecture and grasp the concepts. The lecturer made me feel a bit uncomfortable at the beginning, since he looked strict and introduced the topic in a manner that made it seem very challenging. I was more than curious to know how projects and risk assessment are linked. The leadership role that I took towards our group project in week five made me nervous at the beginning of the week. I did not know how I would relate to my colleagues, of whom I was very fond, as their leader. The thought of how I would deal with them in instances of non-compliance made me feel uneasy with the role. As time passed, I became more comfortable with the role because of my group's cooperation. The experience was very influential and I was delighted that things turned out successfully. I had a wonderful experience from the lecture on risks and projects. The lecture enabled me to have an understanding of risks and projects. I managed to clearly differentiate between a risk and a hazard. I was in a position to assess the

Monday, November 18, 2019

Book report on The Stranger by Albert Camus Essay

Book report on The Stranger by Albert Camus - Essay Example Albert Camus is considered to be a French author of the Modernist era, and he is also famous for his philosophical views and journalistic articles. The philosophy of the absurd remains his most notable contribution to the literature of that period (McCarthy, 5-6). This man had an individual and extraordinary perception of the world that he poured into his creations. "The Stranger" was first published in the year 1942 (McCarthy, 1). Albert Camus was 29 years old, and that was a period of war and devastation. His father had been killed in the whirlpool of events of the previous war (McCarthy, 3). That is why we may trace personal and historical references while reading the book. Camus presents his own vision of life, one that may seem ridiculous and irritating to society. Still, this vision may be understood from a different perspective, one the author invites us to consider. The setting of the novel is the French colony of Algiers, and the time is the period before World War II. "The Stranger" consists of two parts that are thematically and logically divided. The main character is named Meursault. There are a few secondary characters in the story who help to develop the plot. The slant of the novel is tragic, with philosophical implications. The atmosphere is gloomy and dull, with calm and confident inclusions. The novel "The Stranger" offers the audience the story of an allegedly ordinary person, Meursault, who remains a reclusive man with small needs and a paltry subsistence. He is engaged in boring work and lives in a small and dirty room. Physical necessities are of more importance to Meursault than personal feelings and moral dogmas. This man is viewed as the protagonist of the story. We may also regard him as the anti-hero of the novel from some perspectives. Camus involves the readers into contradictory

Friday, November 15, 2019

Load Balancing as an Optimization Problem: GSO Solution

Load Balancing as an Optimization Problem: GSO Solution

METHODOLOGY

INTRODUCTION

In this chapter, we present a novel methodology which treats load balancing as an optimization problem. A stochastic approach, Glowworm Swarm Optimization (GSO), is employed to solve the above-mentioned optimization problem. In the proposed method, excellent features of various existing load balancing algorithms, as discussed in chapter 2, are also integrated.

PROPOSED METHODOLOGY

There are numerous cloud computing categories. This work mainly focuses on a public cloud. A public cloud is based on the typical cloud computing model, and its services are provided by a service provider [42]. A public cloud comprises several nodes, and the nodes are in different physical locations. The cloud is partitioned in order to manage this large cloud. A cloud consists of several cloud partitions, with each partition having its own load balancer, and there is a main controller which manages all these partitions.

3.2.1 Job Assignment Strategy

The algorithm for assigning jobs to a cloud partition is shown in Figure 3.1:
Step 1: jobs arrive at the main controller
Step 2: choosing the cloud partition
Step 3: if the cloud partition state is idle or normal state then
Step 4: jobs arrive at the cloud partition balancer
Step 5: assigning the jobs to particular nodes based on the strategy

Figure 3.1: Flowchart of Proposed Job Assignment Strategy.

Load Balancing Strategy

In the cloud, load balancing is a technique to allocate workload over one or more servers, network boundaries, hard drives, or other resources. Representative datacenter implementations depend on massive, significant computing hardware and network communications, which are subject to the common risks linked with any physical device, including hardware failure, power interruptions and resource limits in case of high demand. High-quality load balancing will increase the performance of the entire cloud. However, there is no general procedure that can work in all possible conditions. Several methods have been employed to solve the existing problem. Each specific method has its merit in a specific area, but not in all circumstances. Hence, the proposed model combines various methods and interchanges between appropriate load balancing methods as per the system status. Here, the idle status uses Fuzzy Logic, while the normal status uses a Glowworm Swarm Optimization based load balancing strategy.

Load Balancing using Fuzzy Logic

When the status of a cloud partition is idle, several computing resources are free and comparatively few jobs are arriving. In these circumstances, the cloud partition has the capability to process jobs as fast as possible, so an effortless load balancing method can be used. Zadeh [12] proposed a fuzzy set theory in which the set boundaries were not precisely defined, but were in fact gradational. Such a set is characterized by a continuum of grades of membership which allocates to each object a membership grade ranging from zero to one [12]. A new load balancing algorithm based on Fuzzy Logic in a virtualized cloud computing environment is implemented to achieve better processing and response time. The load balancing algorithm is applied before the job reaches the processing servers; the job is scheduled based on various input parameters such as the assigned load of the Virtual Machine (VM) and the processor speed. The algorithm maintains the information of each Virtual Machine (VM) and the number of requests currently assigned to each VM of the system.
Therefore, the algorithm recognizes the least loaded machine: when a user request comes in to process its job, the first least loaded machine is identified and the request is processed there. In case more than one least loaded machine is available, the new fuzzy-logic-based load balancing technique is applied, where fuzzy logic, being very natural like human language, allows us to formulate the load balancing problem. The fuzzification process is carried out by a fuzzifier that transforms two types of input data, the assigned load and the processor speed of the Virtual Machine (VM), and one output, the balanced load, which are required in the inference system, as shown in figure 3.2, figure 3.3 and figure 3.4 respectively. Fuzzy logic evaluates the load and processor speed of the virtual machine as the two input parameters in our proposed work to produce a better value for equalizing the load in the cloud environment. These parameters are taken as inputs to the fuzzifier, which are needed to estimate the balanced load as output, as shown in figure 3.4.

Figure 3.2: Membership input function of Processor Speed
Figure 3.3: Membership input function of Assigned Load
Figure 3.4: Membership output function of Balanced Load

To aggregate the outputs of the inference rules [13], the low-high inference method is employed. A number of IF-THEN rules are determined by making use of rule-based fuzzy logic to get the output response for given input conditions; here the rule base is comprised of a set of semantic control rules and the supporting control objectives in the system.

If (processor_speed is low) and (assigned_load is least) then (balanced_load is medium)
If (processor_speed is low) and (assigned_load is medium) then (balanced_load is low)
If (processor_speed is low) and (assigned_load is high) then (balanced_load is low)
If (processor_speed is Medium) and (assigned_load is least) then (balanced_load is high)
If (processor_speed is Medium) and (assigned_load is medium) then (balanced_load is medium)
If (processor_speed is Medium) and (assigned_load is high) then (balanced_load is low)
If (processor_speed is high) and (assigned_load is least) then (balanced_load is high)
If (processor_speed is high) and (assigned_load is medium) then (balanced_load is medium)
If (processor_speed is high) and (assigned_load is high) then (balanced_load is medium)
If (processor_speed is very_high) and (assigned_load is least) then (balanced_load is high)
If (processor_speed is very_high) and (assigned_load is medium) then (balanced_load is high)
If (processor_speed is very_high) and (assigned_load is high) then (balanced_load is medium)

As shown above, there are 12 potential logical output response conclusions in our proposed work. Defuzzification is the method of converting the fuzzy output set into a single value, and the smallest of maximum (SOM) procedure is employed for the defuzzification. The aggregated fuzzy set comprises a range of output values that are defuzzified in order to derive a single output value. The defuzzifier combines the accumulated semantic values from the latent fuzzy control action and produces a non-fuzzy control output, which represents the balanced load associated with the load conditions. The defuzzification process is used to evaluate the membership function of the accumulated output.
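To make the inference and defuzzification steps above concrete, the following is a minimal Python sketch of the fuzzifier, the 12-rule base and a smallest-of-maximum defuzzifier. The triangular membership breakpoints and the normalised [0, 1] universes are illustrative placeholders (the actual membership shapes are those of figures 3.2-3.4, which are not reproduced here), so this is an independent sketch rather than the chapter's implementation.

    # Minimal fuzzy load-balancing sketch; breakpoints are illustrative placeholders.
    def tri(x, a, b, c):
        # Triangular membership with feet a, c and peak b (shoulders when a == b or b == c).
        if x < a or x > c:
            return 0.0
        if x <= b:
            return 1.0 if a == b else (x - a) / (b - a)
        return 1.0 if b == c else (c - x) / (c - b)

    # Input memberships (processor speed and assigned load normalised to [0, 1]).
    SPEED = {"low": (0.0, 0.0, 0.4), "medium": (0.2, 0.5, 0.8),
             "high": (0.5, 0.75, 1.0), "very_high": (0.7, 1.0, 1.0)}
    LOAD = {"least": (0.0, 0.0, 0.4), "medium": (0.2, 0.5, 0.8), "high": (0.6, 1.0, 1.0)}
    # Output memberships for the balanced load.
    OUT = {"low": (0.0, 0.0, 0.5), "medium": (0.25, 0.5, 0.75), "high": (0.5, 1.0, 1.0)}

    # The 12 IF-THEN rules above: (speed term, load term) -> balanced_load term.
    RULES = {
        ("low", "least"): "medium", ("low", "medium"): "low", ("low", "high"): "low",
        ("medium", "least"): "high", ("medium", "medium"): "medium", ("medium", "high"): "low",
        ("high", "least"): "high", ("high", "medium"): "medium", ("high", "high"): "medium",
        ("very_high", "least"): "high", ("very_high", "medium"): "high", ("very_high", "high"): "medium",
    }

    def balanced_load(speed, load, steps=101):
        # Fire each rule with min (AND), aggregate with max, then take the smallest of maximum.
        strength = {}
        for (s_term, l_term), o_term in RULES.items():
            w = min(tri(speed, *SPEED[s_term]), tri(load, *LOAD[l_term]))
            strength[o_term] = max(strength.get(o_term, 0.0), w)
        xs = [i / (steps - 1) for i in range(steps)]
        agg = [max(min(w, tri(x, *OUT[t])) for t, w in strength.items()) for x in xs]
        peak = max(agg)
        return min(x for x, m in zip(xs, agg) if m == peak)  # smallest of maximum

    # Example: a fast, lightly loaded VM scores a high balanced-load value.
    print(balanced_load(speed=0.9, load=0.1))

Under these placeholder memberships, a fast and lightly loaded VM is scored close to 1, so the balancer would prefer it when dispatching the next request.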
The algorithm-1 defined to manage the load in the virtual machines of the cloud is as follows:

Begin
  Request_to_resource()
  L1: If (resource free)
    Begin
      Estimate connection_string()
      Select fuzzy_rulebase()
      Return resource
    End
  Else
    Begin
      If (any more resources found)
        Select_next_resource()
        Go to L1
      Else
        Exit
    End
End

The proposed algorithm starts by requesting a connection to a resource. It tests the availability of the resource. It calculates the connection strength if the resource is found. Then it selects the connection, which is used to access the resource according to the processor speed and load of the virtual machine, using fuzzy logic.

Load Balancing using GSO (Glowworm Swarm Optimization)

When the status of a cloud partition is normal, tasks arrive at a faster rate compared to the idle state and the situation becomes more complex, thus a novel strategy is deployed for load balancing. Each user wants his job completed in the shortest time; as a result the public cloud requires a strategy that can finish the jobs of all users with an adequate response. In this optimization algorithm, each glowworm i is distributed in the objective function definition space [14]. These glowworms carry their own luciferin values and have a respective scope called the local-decision range r_d^i. As a glowworm searches its local-decision range for the neighbour set, it is attracted to the neighbour with the brightest glow within that set. That is, a glowworm selects a neighbour whose luciferin value is greater than its own, and the flight direction changes each time the selected neighbour changes. Each glowworm encodes the objective function value at its current location into a luciferin value and advertises it within its neighbourhood. The neighbour set of a glowworm comprises those glowworms that have a comparatively higher luciferin value and that are situated within a dynamic decision range, and their movements are updated by equation (8) at each iteration.

Local-decision range update:

r_d^i(t+1) = min{ r_s, max{ 0, r_d^i(t) + β ( n_t − |N_i(t)| ) } }    (8)

where r_d^i(t) is the glowworm's local-decision range at iteration t, r_s is the sensor range, n_t is the neighbourhood threshold, and the parameter β governs the rate of change of the neighbourhood range.

The local-decision range contains the following set of glowworms:

N_i(t) = { j : ||x_j(t) − x_i(t)|| < r_d^i(t) and ℓ_i(t) < ℓ_j(t) }    (9)

where x_j(t) is the position of glowworm j at iteration t and ℓ_j(t) is the luciferin of glowworm j at iteration t; the set of neighbours of glowworm i comprises those glowworms that have a comparatively higher luciferin value and that are situated within a dynamic decision range bounded above by the circular sensor range r_s.

As given in equation (10), each glowworm i selects a neighbour j with a probability p_ij(t) and moves toward it.

Probability distribution used to select a neighbour:

p_ij(t) = ( ℓ_j(t) − ℓ_i(t) ) / Σ_{k ∈ N_i(t)} ( ℓ_k(t) − ℓ_i(t) )    (10)

Movement update:

x_i(t+1) = x_i(t) + s · ( x_j(t) − x_i(t) ) / ||x_j(t) − x_i(t)||    (11)

where s is the step size.

Luciferin update:

ℓ_i(t+1) = (1 − ρ) ℓ_i(t) + γ J(x_i(t+1))    (12)

where ℓ_i(t) is the luciferin value of glowworm i at iteration t, the decay constant ρ reflects the accumulative goodness of the path followed by the glowworm through its ongoing luciferin value, the parameter γ only scales the function fitness values, and J(x_i(t+1)) is the value of the test function.
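A minimal Python sketch of the update loop defined by equations (8)-(12) is given below. The toy objective function and the parameter values (ρ, γ, β, s, r_s, n_t) are illustrative assumptions for demonstration, not the settings of the proposed load-balancing system.

    import math, random

    # Minimal GSO sketch of equations (8)-(12) on a toy objective; parameters are illustrative.
    RHO, GAMMA = 0.4, 0.6      # luciferin decay / enhancement constants
    BETA, STEP = 0.08, 0.03    # decision-range rate of change, movement step s
    R_S, N_T = 3.0, 5          # sensor range r_s and neighbourhood threshold n_t

    def objective(x):
        # Toy fitness peaking at the origin (stands in for a load-balance score).
        return -sum(v * v for v in x)

    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    n, dim, iters = 30, 2, 100
    pos = [[random.uniform(-3, 3) for _ in range(dim)] for _ in range(n)]
    luciferin = [5.0] * n
    r_d = [R_S] * n

    for _ in range(iters):
        # Luciferin update, equation (12).
        luciferin = [(1 - RHO) * l + GAMMA * objective(p) for l, p in zip(luciferin, pos)]
        new_pos = [p[:] for p in pos]
        for i in range(n):
            # Neighbour set, equation (9): closer than r_d[i] and brighter than glowworm i.
            nbrs = [j for j in range(n)
                    if j != i and dist(pos[j], pos[i]) < r_d[i] and luciferin[j] > luciferin[i]]
            if nbrs:
                # Probabilistic neighbour selection, equation (10).
                weights = [luciferin[j] - luciferin[i] for j in nbrs]
                j = random.choices(nbrs, weights=weights)[0]
                # Movement update, equation (11).
                d = max(dist(pos[j], pos[i]), 1e-12)
                new_pos[i] = [p + STEP * (q - p) / d for p, q in zip(pos[i], pos[j])]
            # Local-decision range update, equation (8).
            r_d[i] = min(R_S, max(0.0, r_d[i] + BETA * (N_T - len(nbrs))))
        pos = new_pos

    print("best position found:", max(pos, key=objective))

In a load-balancing setting, the objective function would score candidate servers or assignments instead of a point in the plane, while the luciferin, neighbour-selection and movement updates remain the same.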
Figure 3.5 shows the flowchart of the GSO algorithm. In the context of load balancing for cloud computing, the GSO algorithm checks the status of the servers simultaneously to determine whether they are free. For example, a user wants to download a file of size 50 MB. The algorithm checks by iteration; if the user is admitted to a server, the message "target achieved" is returned.

Figure 3.5: Flowchart of GSO

Analysis of the Accrual Anomaly | Accounting Dissertation

Sloan (1996), in a seminal paper, added the accrual anomaly to the list of market imperfections. Since then, academics have focused on the empirical investigation of the anomaly and the connection it has with other mispricing phenomena. The accrual anomaly is still at an embryonic stage and further research is needed to confirm the profitability of an accruals-based strategy net of transaction costs. The current study investigates the accrual anomaly while taking into consideration a UK sample from 1991 to 2008. In addition, the predictive power of the Fama and French (1996) factors HML and SMB is tested along with the industrial production growth, the dividend yield and the term structure of the interest rates.

Chapter 1 Introduction

Since the introduction of the random walk theory, which formed the basis for the evolvement of the Efficient Market Hypothesis (EMH hereafter) proposed by Fama (1965), the financial literature has made many advances, but a piece of the puzzle that is still missing is whether the EMH holds. Undoubtedly, the aforementioned debate can be considered one of the most fruitful and fast progressing financial debates of the last two decades. The Efficient Market Hypothesis has met many challenges regardless of which of its three forms is being investigated. However, the weak form and the semi-strong hypothesis have been the most controversial. A literally vast collection of academic papers discuss, explore and argue for phenomena that seem to reject that the financial markets are efficient. The famous label of "anomaly" has taken several forms. Many well-known anomalies such as the contrarian investment, the post announcement drift, the accruals anomaly and many others are just the beginning of an endless trip. There is absolutely no doubt that many more are going to be introduced and evidence for the ability of investors to earn abnormal returns will be documented. Recently, academics have tried to expand their investigation to whether these well-documented anomalies are actually profitable in view of several limitations (transaction costs etc) and whether the anomalies are connected. Many papers are exploring the connection of the anomalies with each other, proposing the existence of a "principal" mispricing that is documented in several forms. The current study tries to look into the anomaly that was initially documented by Sloan (1996) and has been labelled the "accrual anomaly". The accrual anomaly can be characterised as being at an embryonic stage if the basis for comparison is the amount of publications and the dimensions of the anomaly on which light has been shed. The facts for the accrual anomaly suggest the existence of the opportunity for investors to earn abnormal returns by taking advantage of simple publicly available information.
On the other hand, accruals, being an accounting figure, have been approached from different points of view, with consequences visible in the results of the academic papers. Furthermore, Stark et al (2009) challenge the actual profitability of the accrual anomaly by simply taking transaction costs into consideration. The present paper employs an accrual strategy for a sample comprising UK firms during 1991-2008. The aim is to empirically investigate the profitability of such strategies during the whole data sample. The methodology for the calculation of accruals is largely based on the paper of Hardouvelis et al (2009). Stark et al (2009) propose that the positive excess returns of the accruals strategy are based on the profitability of small stocks. In order to investigate the aforementioned claim, the current study employs an additional strategy by constructing intersecting portfolios based on accruals and size. Finally, five variables are investigated in the second part of the study for their predictive power on the excess returns of the constructed portfolios. The monumental paper of Fama and French (1996) documented an impressive performance of two constructed variables (the returns of portfolios named HML and SMB). In addition, the dividend yield of the FTSE All Share index, the industrial production growth and the term structure of the interest rates will be investigated as they are considered potential candidates for the prediction of stock returns.

Chapter 2 Literature review

2.1. Introduction

During the last century the financial world has offered many substantial advances. From the Portfolio Theory of Markowitz (1952) to the development of the Capital Asset Pricing Model of Sharpe (1964) and Lintner (1965), and from the Efficient Market Hypothesis (hereafter EMH), developed by Fama (1965), to the recent literature that challenges both the CAPM and the EMH, they all seem to be a chain reaction. The financial academic world aims to give difficult but important answers on whether markets are efficient and on how investors should allocate their funds. During the last two decades, many researchers have documented that there exist strategies that challenge the claim of the supporters of efficient and complete markets. In this chapter, the effort will be focused on reviewing the financial literature from the birth of the idea of the EMH until the recent publications that confirm, reject or challenge it. In a seminal paper, Fama (1970) defined efficient markets and categorised them according to the type of information used by investors. Since then, the finance literature has offered a plethora of studies that aim to test or prove whether markets are indeed efficient or not. Well-known anomalies such as the post announcement drift, the value-growth anomaly or the accruals anomaly have been the theme of many articles ever since.

2.2. Review of the value-growth anomaly

We consider it helpful to review the literature for the value-growth anomaly since it was one of the first anomalies to be investigated to such an extent. In addition, the research on the value-growth anomaly has yielded a largely productive debate on whether the documented returns are due to higher risk or some other source of mispricing. Basu (1970) concluded that stocks with a high Earnings to Price ratio tend to outperform stocks with a low E/P.
Lakonishok, Shleifer and Vishny (1994) documented that stocks that appear to have a low price relative to a fundamental (book value, earnings, dividends etc) can outperform stocks with a high price relative to a fundamental measure of value. Lakonishok, Shleifer and Vishny (1994) initiated a productive period that aimed to settle the dispute on the EMH and investigate the causes of such "anomalies". Thus, the aforementioned researchers sparked the debate not only on the market efficiency hypothesis but also on the sources of these phenomena. Fama and French (1992) supported the idea that certain stocks outperform their counterparts due to the larger risk that the investors bear. Lakonishok, Shleifer and Vishny (1994) supported the idea that investors fail to correctly react to information that is available to them. The same idea was supported by many researchers such as Piotroski (2001). The latter also constructed a score in order to categorise stocks with high B/M that can yield positive abnormal returns (namely, the F-Score). Additionally, the "market efficiency debate" drove behavioural finance to rise in popularity. The value-growth phenomenon has yielded many articles that aim to find evidence that a profitable strategy is feasible or to trace the sources of these profits, but, at the same time, the main approach adopted in each study varies significantly. Asness (1997) and Daniel and Titman (1999) examine the price momentum, while Lakonishok, Sougiannis and Chan (2001) examine the impact of the value of intangible assets on security returns. In addition, researchers have found evidence that value-growth strategies tend to be successful worldwide, as their results suggest. To name a few, Chan, Hamao and Lakonishok (1991) focused on the Japanese market, Put and Veld (1995) based their research on France, Germany and the Netherlands, and Gregory, Harris and Michou (2001) examined the UK stock market. It is worth mentioning that solely the evidence of such profitable strategies could be sufficient to draw the attention of practitioners, but academics are additionally interested in exploring the main cause of these arising opportunities as well as the relationship between the aforementioned phenomena (namely, the value-growth anomaly, the post announcement drift and the accrual anomaly). In general, two schools of thought have been developed: the one that supports the risk-based explanation or, in other words, that stocks yield higher returns simply because they are riskier, and the one that supports that investors fail to recognise the correct signs included in the available information.

2.3. The accruals anomaly

2.3.1. Introduction of the accrual anomaly.

Sloan (1996) documented that firms with high (low) accruals tend to earn negative (positive) returns in the following year. Based on this finding, a portfolio that takes a long position in stocks with low accruals and a short position in stocks with high accruals yields approximately 10% abnormal returns. According to Sloan (1996), investors tend to overreact to information on current earnings. Sloan's (1996) seminal paper has been characterised as a productive work that initiated a debate that has been interesting to follow during the last decade. It is worth noting that even the very recent literature on the accrual anomaly has not reached a reconciling conclusion about the main causes of this particular phenomenon or about whether a trading strategy (net of transaction costs) based solely on the mispricing of accruals can be systematically profitable.
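As a rough illustration of how such a decile-based accrual hedge portfolio is typically formed, the Python sketch below sorts a firm-year panel into accrual deciles and computes the return of a long position in the lowest decile against a short position in the highest. The column names, the toy random data and the equal weighting are assumptions for illustration only, not the dissertation's dataset or its exact methodology.

    import numpy as np
    import pandas as pd

    # Illustrative Sloan-style accrual hedge: long the lowest-accrual decile, short the highest.
    def accrual_hedge_return(panel: pd.DataFrame) -> pd.Series:
        """panel needs columns: year, firm, accruals (scaled by assets), ret_next."""
        def one_year(df: pd.DataFrame) -> float:
            deciles = pd.qcut(df["accruals"], 10, labels=False, duplicates="drop")
            long_leg = df.loc[deciles == deciles.min(), "ret_next"].mean()
            short_leg = df.loc[deciles == deciles.max(), "ret_next"].mean()
            return long_leg - short_leg
        return panel.groupby("year").apply(one_year)

    # Toy usage with random data standing in for a real firm-year panel.
    rng = np.random.default_rng(0)
    toy = pd.DataFrame({
        "year": np.repeat([1991, 1992], 200),
        "firm": np.tile(np.arange(200), 2),
        "accruals": rng.normal(0.0, 0.1, 400),
        "ret_next": rng.normal(0.08, 0.3, 400),
    })
    print(accrual_hedge_return(toy))

The studies reviewed below differ mainly in how the accruals column is measured, how the portfolios are weighted and intersected with other characteristics such as size, and whether transaction costs are subtracted from the resulting hedge returns.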
At this point it is worth mentioning that accruals have been found to be a statistically significant negative predictor of future stock returns. On the other hand, there are papers that examine the accruals and their relation with the aggregate market. A simple example is the paper published by Hirshleifer, Hou and Teoh (2007), who aim to identify the relation of the accruals, if any, with the aggregate stock market. Their findings support that while the operating accruals have been found to be a statistically significant and negative predictor of stock returns, the relation with the market portfolio is strong and positive. They support that the sign of the accruals coefficient varies from industry to industry, reaching a peak when the High Tech industry is taken into account (1.15), and taking a negative value for the Communication and Beer/Liquor industries.

2.3.2 Evidence for the international presence of the phenomenon.

Researchers that investigated the accruals anomaly followed different approaches. At this point, it is worth noting that the evidence shows the accrual anomaly (although it was first found to be present in the US market) to exist worldwide. Leippold and Lohre (2008) examine the accrual anomaly within an international framework. The researchers document that the accrual anomaly is a fact for a plethora of markets. The contribution of the paper, though, is the large and "complete" number of tests used, so that the possibility of pure randomness would be eliminated. Although similar tests showed that momentum strategies can be profitable, recent methodologies used by the researchers and proposed by Romano and Wolf (2005) and Romano, Shaikh and Wolf (2008) suggest that the accruals anomaly can be partially "random". It is noteworthy that the additional tests make the "anomaly" fade out for almost all the samples apart from the markets of the US, Australia and Denmark. Kaserer and Klingler (2008) examine how the over-reaction to accrual information is connected with the accounting standards applied. The researchers constructed their sample solely from German firms, and their findings document that the anomaly is present in Germany too. We should mention at this point that, interestingly, prior to 2000, that is, prior to the adoption of the international accounting standards by Germany, the evidence did not support the existence of the accrual anomaly. However, during 2000-2002, Kaserer and Klingler (2008) found that the market overreacted to accrual information. Hence, the authors support the idea that an additional cause of the anomaly is the lack of legal mechanisms to enforce the preparation of the financial statements according to the international accounting standards, which might have given firms the opportunity to "manipulate" their earnings. Another paper that focuses on the worldwide presence of the accruals mispricing is that of Rajgopal and Venkatachalam (2007). Rajgopal and Venkatachalam examined a total of 19 markets and found that the particular market anomaly exists in Australia, the UK, Canada and the US. The authors' primary goal was to identify the key drivers that can distinguish the markets where the anomaly was documented. Their evidence supports the idea that an accrual strategy is favoured in countries where there is a common law tradition, extensive accrual accounting and a low concentration of firms' ownership combined with weak shareholders' rights.
LaFond (2005) also considers the existence of the phenomenon within a global framework. The author's findings support the notion that the accrual anomaly is present worldwide. In addition, LaFond argues that there is not a unique driving factor responsible for the phenomenon across the markets. It is worth noting that LaFond (2005) documented that this particular market imperfection is present in markets with diverse methodologies of accrual accounting. The findings are against the idea that the accrual anomaly has any relation with the level of shareholder protection or a common law tradition, as suggested by Rajgopal and Venkatachalam (2007). Finally, the author suggests that, if anything, it is not the different method of accrual accounting (measurement issues) that favours or eliminates the accrual anomaly, but the accrual accounting itself.

2.3.3. Further evidence for the roots of the accruals anomaly.

Additionally, papers such as those of Thomas and Zang (2002) or Hribar (2000) decompose accruals into changes in different items (such as inventory, accounts payable etc). The findings collectively suggest that extreme changes in inventory affect returns the most. On the other hand, many articles connect the accruals with information used by investors, such as the behaviour of insiders or analysts, as the latter can be considered a major signal to investors of a potential manipulation of the firms' figures. In particular, Beneish and Vargus (2002) documented that firms with high accruals and significant insider selling have substantial negative returns. Bradshaw (2001) and Barth and Hutton (2001) examine the analysts' reports and their relation with the accruals anomaly. Their findings support that the analysts' forecasting error tends to be larger for firms with high accruals, while analysts do not revise their forecasts when new information on accruals is available. Gu and Jain (2006) decompose accruals into changes in inventory, changes in accounts receivable and payable, and depreciation expenses, and try to identify the impact of the individual components on the anomaly. Consistent with Sloan (1996), Gu and Jain (2006) document that the accrual anomaly exists at the component level. The findings are important since Desai et al (2004) supported the connection of the accrual anomaly with a single variable (cash flows from operations). The researchers suggest that the results yielded by Desai et al (2004) were highly dependent on the methodology used and thus suggested that the accruals anomaly is "alive and well". Moreover, other articles try to confirm whether the anomaly is mainly caused by the wrong interpretation of the information contained in accruals. Ali et al. (2000) investigate whether the naïve investors' hypothesis holds. Following the methodology introduced by Hand (1990) and Walther (1997), they found that for smaller firms, which are more likely to be followed by sophisticated investors, the relation between accruals and negative future returns is weaker compared to larger firms, which are followed by many analysts. Therefore, the researchers suggest that, if anything, the naïve investors' hypothesis does not hold. In contrast to other market anomalies where findings suggest that the naïve investors' hypothesis holds, the accruals anomaly is suggested to be unique. Shi and Zhang (2007) investigate the earnings fixation hypothesis, suggesting that the accruals anomaly is based on investors' "fixation" or "obsession" on earnings.
Their primary hypothesis is that if investors rely heavily on the reports about earnings and misprice the value-relevant earnings, then the returns should be dependent not only on the accruals but also on how the stock's price changes according to reported earnings. The researchers' hypothesis is confirmed, and the findings support that an accrual strategy for firms whose stock price fluctuates strongly with earnings yields a 37% annual return. Sawicki and Shrestha (2009) aim to examine two possible explanations for the accruals anomaly. Sloan (1996) proposed the fixation theory, under which investors fixate on earnings and thus overvalue or undervalue information about accruals. Kothari et al. (2006) proposed the "agency theory of overvalued equity", according to which managers of overvalued firms try to prolong the period of this overvaluation, which causes accruals to increase. The paper uses insider trading and other firm characteristics and tries to compare and contrast the two major explanations. Evidence produced by Sawicki and Shrestha (2009) supports the Kothari et al. (2006) explanation for the accrual anomaly. In a paper with a relatively different motif, Wu and Zhang (2008) examine the role that discount rates play in the accrual anomaly. They argue that, if anything, the anomaly is not caused by irrationality on the investors' side but by the rationality of firms, as proposed by the q-theory of investment. They argue that when discount rates fall and more projects become profitable (which makes accruals increase), future stock returns should decline. In other words, if capital investment correctly adjusts to the current discount rates, the accruals should be negatively correlated with future returns and positively correlated with current returns. The evidence of Wu and Zhang (2008) supports that the accruals are negatively correlated with future stock returns, but the contribution of the paper is that they document that current stock returns are positively correlated with the accruals.

2.3.4. The relation of the accrual anomaly with other market imperfections.

Many papers examine the relation between the accruals anomaly and other well-known anomalies such as the post announcement drift or the value-growth phenomenon. Desai et al. (2002) suggest that the "value-growth" anomaly and the accruals anomaly basically interact and conclude that the "accruals strategy and the C/P reflect the same underlying phenomena". Collins and Hribar (2000) suggest that there is no link between the accruals anomaly and the "PAD", while Fairfield et al. (2001) support that the accruals anomaly is a sub-category of an anomaly caused by the mistaken interpretation of information about growth by the investors. Cheng and Thomas (2006) examine the claim that the accrual anomaly is part of a broader anomaly (and more specifically, the value-glamour anomaly). Prior literature suggested that the operating cash flows to price ratio subsumes accruals in explaining future stock returns (Desai et al (2004)). Their evidence suggests that the operating CF to price ratio subsumes neither abnormal nor total accruals in future announcement returns. This particular result does not confirm the claim that the accrual anomaly is part of a broad value-glamour anomaly. Atwood and Xie (2005) focus on the relation of the accrual anomaly and the mispricing of special items, first documented by Burgstahler, Jiambalvo and Shevlin (2002).
Their hypothesis that the two phenomena are highly related is confirmed, since the researchers found that special items and accruals are positively correlated. Additionally, further tests yielded results suggesting that the two imperfections are not distinct, while the special items have an impact on how the market misprices the accruals. Louis and Sun (2008) aim to assess the relation between the abnormal accrual anomaly and the post earnings announcement drift anomaly. The authors hypothesize that both anomalies are related to the management of earnings and thus aim to find whether the two are closely connected. The findings are consistent with the primary hypothesis, as they found that "firms with a large positive change of earnings that were least likely to have manipulated earnings downwards" did not suffer from PEAD, while the same result was yielded for firms that had a large negative change of earnings and were least likely to have managed their earnings upwards. As supported by many researchers, the value-growth anomaly and the accruals anomaly might be closely related, or they might even be caused by similar or even identical roots. Fama and French (1996) support that the book to market factor captures the risk of default, while Khan (2008) suggests that, in a similar pattern, firms with low accruals have a larger probability of bankruptcy. Therefore, many researchers try to connect the two phenomena or to answer whether a strategy based on the accruals can offer more than what the value-growth strategy offers. Hardouvelis, Papanastopoulos, Thomakos and Wang (2009) connect the two anomalies by assessing the profitability of interacting portfolios based on the accruals and value-growth measures. Their findings support that positive returns are obtainable and magnified when a long position is held in a portfolio with low accruals combined with stocks that are characterised as high market to book. The choice between a risk-based explanation and an imperfection of the markets is considered a major debate, as it can challenge the market efficiency hypothesis. Many researchers, such as Fama and French (1996), noted that any potentially profitable strategy is simply due to the higher risk that the investors have to bear by holding such portfolios. In a similar way, the profitable accruals strategies are considered a compensation for higher risk. Stocks that yield larger returns are compared to, or labelled as, stocks of firms that are close to financial distress. Khan (2000) aims to confirm or reject the risk-based explanation of the accruals anomaly. The researcher uses the ICAPM in order to test whether the risk captured by the model can explain the anomaly first documented by Sloan (1996). It is worth noting that the descriptive statistics for the sample used showed that firms that had low accruals also had high bankruptcy risk. The contribution of the paper is that, by proposing a four-factor model enhanced by recent asset pricing advances, it showed that a great portion of the mispricing that results in the accrual anomaly can be explained within a risk-based framework. Furthermore, Jeffrey Ng (2005) examines the risk-based explanation for the accrual anomaly, which proposes that accruals include information on financial distress. As proposed by many papers, the accrual anomaly is simply based on the fact that investors bear more risk and thus low accrual firms have positive abnormal returns.
The researcher tries to examine how and whether the abnormal returns of a portfolio which is short on low accruals stocks and long on high accrual firms change when controlling for distress risk. Evidence supports that at least a part of the abnormal returns is a compensation for bearing additional risk. Finally, the results support that a big portion of the high abnormal returns of the accrual strategy used in the particular paper is due to stocks that have high distress risk.

2.3.5. The accruals anomaly and its relation with firms' characteristics.

A noteworthy part of the academic literature examines the existence of some key characteristics or drivers that are highly correlated with the accruals anomaly. Many researchers have published papers that aim to identify the impact of firm characteristics such as the size of the firm, as well as characteristics that belong to the broader environment of the firms, such as the accounting standards or the power of the minority shareholders. Zhang (2007) investigates whether the accrual anomaly varies cross-sectionally while being related to firm-specific characteristics. The researcher primarily aims to explain what the main reason for the accrual anomaly is. As Zhang (2007) mentions, Sloan (1996) attributes the accrual anomaly to the overestimation of the persistence of accruals by investors, while Fairfield (2003) argues that the accrual anomaly is a "special case of a wider anomaly based on growth". The evidence supports the researcher's hypothesis that characteristics such as the covariance of employee growth with the accruals have an impact on future stock returns. Finally, Zhang (2007) documents that accruals co-vary with investment in fixed assets and external financing. Louis, Robinson and Sbaraglia (2006) examine whether the non-disclosure of accruals information can have an impact on the accruals anomaly. The researchers, dividing their sample into firms that disclose accruals information in the earnings announcement and firms that do not, investigate whether there exists accruals mispricing. The evidence supports that for firms that disclose accruals information, the market manages to correctly understand the discretionary part of the change in earnings. On the contrary, firms that do not disclose accruals information are found to experience "a correction" in their stock price. Chambers and Payne's (2008) primary aim is to examine the relation of the accrual anomaly and auditing quality. The researchers' hypothesis is that the accruals mispricing is related to the quality of auditing. Additionally, their findings support that stock prices do not reflect the accruals persistence characterising the lower-quality audit firms. Finally, their empirical work finds that the returns are greater for the lower-quality audit portfolio of firms. Palmon, Sudit and Yezegel (2008) examine the relation of the accruals mispricing and company size. Evidence shows that company size affects the returns and, as the researchers documented, the negative abnormal returns are mostly due to larger firms, while the positive abnormal returns come from the relatively small firms. In particular, the strategy they found to have the highest profits is the one with a short position in the largest-firm top-accrual decile and a long position in the smallest-firm low-accrual decile. Bhojraj, Sengupta and Zhang (2009) examine the introduction of the Sarbanes-Oxley Act and FAS 146 and how these two changes affected the accrual anomaly.
FAS 146 (under which liabilities are recognized only when they are incurred) reduced companies' ability to "manipulate" earnings, while SOX aims to enhance the credibility of the financial statements. The evidence recognises a change in how the market conceives information about restructuring charges. The authors propose that a possible explanation is that before the introduction of SOX and FAS 146, the market was reluctant due to the ability of the firms to manage earnings. Finally, Bhojraj, Sengupta and Zhang (2009) document that after FAS 146 and the SOX act, low accrual portfolios do not generate positive abnormal returns.

2.4. The applications of the accruals phenomenon and reasons why it is not arbitraged away.

The importance of the analysis of the anomalies is substantial for two reasons. Firstly, the profitability of a costless strategy challenges the EMH, especially if the strategy is based on bearing no additional risk. Secondly, managers' incentives to manipulate the financial statements, and consequently the accruals, would be obvious if a profitable strategy based on such widely available information existed. Chen and Cheng (2002) find that the managers' incentive to record abnormal accruals is highly correlated with the accrual anomaly. The hypothesis of the researchers, which their findings support, was that investors fail to detect when the managers aim to record abnormal accruals, and that may contribute to the accruals anomaly. Richardson's (2000) main objective is to examine whether the information contained in the accruals is utilized by short sellers. As the researcher mentions, previous articles such as that of Teoh and Wong (1999) found that sell-side analysts were unable to correctly "exploit" the information contained in accruals for future returns. Richardson suggests that short sellers are considered sophisticated enough to utilize the accruals information. Findings confirm previous work, such as that of Sloan (2000), who suggests that even short sellers do not correctly utilize the information contained in accruals. Ali, Chen, Yao and Yu (2007) examine whether and how equity funds benefit from the accrual anomaly by taking long positions in low accruals firms. The researchers aim to identify how exposed equity funds are to such a well-known anomaly and what characteristics these funds share. By constructing a measure called the "accruals investing measure" (AIM), they try to document the portion of low accruals firms in the actively managed funds. The evidence shows that funds are generally not widely exposed to low accruals firms, but when they are, they earn an average 2.83% annual return. It is worth noting that the annual return is net of transaction costs. Finally, the side effects of high volatility in returns and in fund flows of the equity funds that are partially based on the accrual anomaly might be the reason behind the reluctance of the managers. Soares and Stark (2009) used UK firms to test whether a profitable accrual strategy is feasible net of transaction costs. Their findings support that the accrual anomaly is indeed present in the UK market. The authors suggest that for such a strategy to be profitable, one is required to trade in firms with small market capitalization.
They also suggest that although the accruals mispricing seems to exist in the UK as well, transaction costs limit the profits to such an extent that the accrual anomaly could hardly be characterised as a challenge to the semi-strong form of the efficient market hypothesis. Finally, we should not neglect to mention two papers that discuss why the markets do not simply correct the accruals anomaly. According to the classical theory, market imperfections produce the incentive for the market to correct the "anomalies" at any point in time. Mashruwala, Rajgopal and Shevlin (2006) examined transaction costs and idiosyncratic risk as possible reasons why the accrual anomaly is not arbitraged away. The researchers aimed to investigate why the market does not correct the anomaly, but also to identify whether the low accruals firms are riskier. The paper poses the question of what stops informed investors from taking long positions in profitable stocks according to the accrual anomaly so that they can arbitrage it away. The paper examines the practical difficulty of finding substitutes so that the risk can be minimized, and its relation with the accrual anomaly. Additionally, the paper investigates the transaction costs, and the findings support that, according to the accrual anomaly, the profitable stocks tend to be the ones with low stock prices and low trading volume. Lev and Nissim (2004) focus on the persistence of the accr

Wednesday, November 13, 2019

Persuasive Essay Against Capital Punishment -- Papers Death Penalty Ar

Persuasive Essay Against Capital Punishment "Kill. (Verb) To make someone or something die." Does anyone really think they have the right to take another person's life? Apparently yes. Perhaps we should give the judge a knife and tell her that if she has decided that the accused is guilty, she should stab him herself. Perhaps then she would hesitate. But if many people (the hundreds or thousands who operate the judicial system) are involved, it spreads, or even divides, the feeling of culpability among many. They may feel less guilty, especially if they believe that they are representing the whole society of their country. What makes it seem more "humane" is the official perspective of it. Death here is a matter of paperwork, not actually a case of ending someone's life. I am absolutely opposed to the death penalty. In this essay I will try to explain why I think society should not accept this barbaric punishment. The most common argument in favour of the death penalty is that it is a deterrent, i.e. someone who has murder in mind will think better of it when he realises that he could be facing death. However, I do not agree with this. When a murderer commits a crime he believes that he will not be caught. Numerous studies have tried to prove the deterrence factor, but have been unable to. A criminal dreads a lifetime prison sentence more than, or the same as, the death penalty in any case. There are two types of murders: crimes committed on the "spur of the moment" (i.e. passion crimes which have not been planned) and pre-meditated murder. If it is a crime of passion, the murderer is not thinking of the consequences at t... ... are then disbarred. They have little incentive to fight for the case when their salary may be under £4 an hour. Finally, who are we to play with the lives of other people? Each person is just one life - how can one life be allowed to decide when another must end? Man is man, not God. Only God should have a divine right over a man's life. Man is equal to man, and for him to take on the role of a superior being can only cause chaos. I believe that it is the duty of a system of justice to protect society from criminals, either by psychological rehabilitation or by imprisoning them for life if necessary; not by murdering them. Capital punishment is used to condemn the guilty of severe crimes. This means: to teach a criminal how to be humane, they must be killed inhumanely. Does this seem logical?