Monday, November 25, 2019

buy custom Oil essay

The price of oil in the 1960s and early 1970s was very low and relatively stable at about $0.015 per liter. However, this price started rising sharply from 1973 ($0.08 per liter). It then stabilized for around five years before rising sharply again, peaking in 1980 at around $0.23 per liter (Figure 2). From this peak the price fell steadily for about five years and then dropped sharply in 1986 to around $0.09 per liter. For the next fourteen years, the price remained relatively low and stable, fluctuating between $0.08 and $0.14. The price started to rise again from the year 2000 (around $0.18), surpassing the $0.23 mark in 2004. It rose steadily for four years before climbing sharply in 2008, peaking at around $0.62 per liter. It fell sharply the next year to about $0.39 per liter, and in 2010 it stood at around $0.50 per liter. The rise in 2008 was distinct because, in just one year, oil recorded its sharpest increase since the 1960s and reached its highest price ever. The rise in the petrol price was larger than in any of the previous seven years (AA, 2007; 2008).

The rise in oil prices can partly be attributed to the normal tendency of prices to rise over time due to inflation, especially as the value of currencies falls. The demand for oil has also been increasing for many reasons. For example, average incomes have been rising over the years (Scheuble, 2011). People therefore travel more, consume more products, buy more electrical appliances, use air conditioners, install heaters in their homes and use other goods that depend on oil, so they demand more oil. Additionally, there are few adequate substitutes for oil in the market today. The price of oil has been increasing, but the price of substitutes such as natural gas has also been rising (Trading Economics). People find no alternatives and so have to use more oil. While the price of oil has been rising, the price of complementary products has been falling: for example, many fuel-efficient cars being brought onto the market tend to be cheaper, so more people buy them, and the demand for petrol, and hence for oil, increases. Lastly, advancing technology has made the efficient and economic use of oil possible (BP, 2011). This in effect makes people use more oil, increasing demand even further.

How the Increase in Oil Price Affects Demand
How the quantity of oil demanded reacts to changes in the price of oil (its sensitivity) is estimated using the own-price elasticity of demand for oil. This is measured as the ratio of the percentage change in the quantity of oil demanded to the percentage change in the price of oil, under the assumption that demand is affected only by the change in price (Mind Tools). By the law of demand and supply, the own-price elasticity of demand for oil is always negative. Demand for oil is relatively inelastic in the short term, meaning that an increase in the oil price does not reduce the quantity demanded very much at first. However, the change in quantity demanded becomes significant in the long term if the higher price is maintained (Drum, 2011). For example, the price elasticity of demand for the UK is about -0.08 in the short term and -0.64 in the long term (Gately and Huntington, 2002, p. 51). If, let us assume, the price of oil rose by even 50%, demand would fall by 4% in the short term but by 32% in the long term.
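To make the arithmetic behind these figures explicit: treating the reported elasticities as constants (a simplifying assumption, since elasticities generally vary along the demand curve and over time), the percentage change in quantity demanded is approximately the elasticity multiplied by the percentage change in price:

$$
\frac{\Delta Q}{Q} \approx E_d \cdot \frac{\Delta P}{P}:
\qquad -0.08 \times 0.50 = -0.04 \;(4\%\ \text{fall, short run}),
\qquad -0.64 \times 0.50 = -0.32 \;(32\%\ \text{fall, long run}).
$$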
This illustrates that the demand response to a price change is felt far more strongly in the long term than in the short term.

Effect of Price on Demand
When the price of oil increases considerably, people are forced to find alternative sources of energy such as natural gas, coal, wind power and solar energy. They will also be forced to walk more, cycle, use public transport or make use of bio-fuels. However, such a switch needs time to take effect, as new power stations and windmills have to be built first, and installing solar power stations also takes time. Therefore, the (negative) change in demand for oil is not felt in the short term but in the long term, once all these alternatives are put in place. If income levels remain the same while the oil price increases, the demand for oil will diminish as people come to deem it too expensive and find it taking up too large a portion of their income. As a result, they will be forced to take cost-cutting measures such as reducing their travel and lowering thermostats in winter, which reduces the demand for oil. Additionally, when the price of oil increases, people can switch to more efficient ways of using oil, such as fuel-efficient cars and better insulation in their workplaces and homes, which in turn reduces their usage of, and hence demand for, oil. Such measures affect the demand for oil in the long term.

Production Trends of Oil
The production of oil has been rising steadily over the years, with the total contributed by both the OPEC block of countries and non-members. In the mid 1960s, total oil production was about 1600 million tonnes. There was a steady increase until about 1973, when production stabilized at about 2800 million tonnes. A slight decrease followed, reaching about 2650 million tonnes in 1975. Production then picked up over the next four years, reaching around 3200 million tonnes in 1979. For about the next three years, total production fell steadily, reaching around 2700 million tonnes in 1982, and then remained relatively stable for about three years. Since 1986, total world production has been increasing steadily. By the year 2000, total production had exceeded 3500 million tonnes. However, production has somewhat stabilized at about 3900 million tonnes from 2004 to date.

Relationship between Oil Price and Production
Total production was lowest in 1965, which also happened to be when the price of oil was lowest. Total oil production then increased steadily for about eight years, while the price of oil remained relatively low and stable over the same period. This changed around 1973, when prices shot up sharply; at the same time, total world production stagnated and even decreased, which can be attributed to the reduced production of the OPEC block of countries. The price of oil then remained stable for about five years, during which total production picked up and increased steadily. The pattern changed again in the late 1970s, when the price rose sharply, peaking in 1980, while production was still increasing steadily. At around 1980, however, total production started to decrease steadily for about four years; again, this decrease was caused by the reduced production of OPEC countries. The price also fell steadily, before dropping sharply in 1984-1985.
From then on, the price fluctuated, though not by wide margins, and generally decreased from 1990, reaching a relative low of less than $0.02 per liter (1960 prices). It rose sharply in 2000 and then rose steadily for seven years. The price of oil rose sharply in 2008, to its highest level ever, before falling sharply the next year; it rose again in 2010. However, while the price has been fluctuating since the mid 1980s, total oil production was increasing steadily until around 2004, when it more or less stabilized. From this description of the relationship between oil production and the corresponding prices over the years, there is no clear pattern between the two of the kind expected from the law of demand and supply (Ramcharan, 2002).

This may be due to expectations about future oil prices. In general, prices are expected to be high in the future. These expectations make the oil producing countries reluctant to exploit their reserves at the moment, as there is a greater incentive to produce in the future, so current prices do not affect this resolve. Secondly, the cost of oil production is a limiting factor. The fixed cost of exploiting and producing oil is high, and there are many constraints on varying production in the short term once pumping from a field has started. Additionally, expanding exploration becomes more expensive and limited due to increasingly smaller oil fields. These two factors make the production of oil relatively steady and quite unresponsive to the current price. Most oil producing countries are also politically unstable; regular bouts of fighting affect oil production in those countries and hence total production, so price changes do not affect production very much. The fear of depletion of oil reserves has also affected production more than the price has (Hubbert, 1956). Speculation by countries, where a country either buys and stores oil expecting a future price increase or buys oil futures so that it is guaranteed oil in the future, has also affected current oil production (Lombardi and Robays, 2011). In all these cases, the price plays a limited role in the total production of oil.

Why the Price of Oil Was So High in 2008
The price of oil was at its peak in 2008. This was partly due to the weak US dollar at the time. Most oil transactions are done in dollars, so oil producers peg their respective currencies to the dollar; when the dollar weakens, their revenues decline while their costs increase. To offset this imbalance, the OPEC countries raised their prices to preserve their profit margins (Amadeo, 2011). The year 2008 was also the year the US and the world experienced an economic crisis driven by falling stocks and a declining real estate industry. As a result, many investors, fearing the worst, ditched the stock markets and instead bought oil futures, which created a speculative bubble. The consequence was an unprecedented surge in oil prices (Amadeo, 2011). Another factor that contributed to the surge in prices was the instability in Nigeria and Venezuela, which is the ninth largest oil producer in the world (Tristam, 2011). Additionally, there was a real threat of a US/Israeli attack on Iran (Roberts, 2008). These two factors increased the demand for oil as countries built up their stocks, fearing disruption. This sudden increase in demand caused the sharp rise in oil prices.
Why the Price of Oil Fell after July 2008
The escalating oil prices were felt in almost all sectors of the economy. However, the situation was arrested quickly. The main factor in the fall of prices was Saudi Arabia's well timed increase in its production. As a result, more oil was pumped into the market, which helped to reduce the price (Amadeo, 2011). The recession in 2008 hit the world's largest economies, the USA and Europe, very hard, and the economic crisis showed no signs of slowing down (Leigh). Therefore, many people in these countries became wary of their spending. The weakening of the dollar also reduced the purchasing power of many Americans (Goldman, 2008). Many took cost-cutting measures such as driving less. All these factors led to lower demand for oil, which in turn contributed to the fall in oil prices.

Will the Price of Oil Remain High?
The price of fuel has been relatively high ever since the brief drop from mid 2008 to early 2009, and this scenario is likely to continue (Rhodes). Political instability persists among the major producers, especially in the Middle East, where terrorist activities are still rife. Iraq is not yet peaceful since it was invaded by the USA. Iran-US relations have yet to cool down and the threat of attacks and counter-attacks is still real. The situation in Nigeria has yet to be resolved, and one of the major oil producers, Libya, is going through political turmoil. All this suggests that oil production will continue to be hampered in the near future (Leigh); therefore, the price will stay high unless there is relative peace in these countries. Although the demand for oil dropped in 2008, the fall was mainly due to the recession at the time. Once the economies pick up, people will spend more freely and the demand for oil will continue to be high, which simply means that the price will continue to be high. The production of oil is largely influenced by OPEC, a block that always tends to influence the price of oil so that its member countries generate more revenue. As long as this trend continues, prices will remain high. Technology has been advancing in terms of fuel production, but the world has yet to find an adequate substitute that would reduce its reliance on oil as its primary fuel. Until such an alternative is found, the price of oil will simply continue to be high.

Thursday, November 21, 2019

Combating Compassion Fatigue Essay Example | Topics and Well Written Essays - 1500 words

Combating Compassion Fatigue - Essay Example Burnout is one of the major concepts of compassion fatigue. The signs of burnout, according to Espeland (2006), include that the nurses are always exhausted, they are cynical and feel detached, and they feel that they are ineffective. They also exhibit signs that include anger, depression, paralysis, feeling stuck, irritability, cynicism, bitterness and negativity towards others, the self, and the world (Espeland, 2006). Job stress is another concept of compassion fatigue, according to Chen et al. (2009). They state that signs of job stress include job absences, conflicts with staff members, depression, staff turnover, and inferior caregiving. The difference between job stress and burnout is that burnout is the result of unrelenting job stress over a period of time; job stress is therefore a lesser version of burnout. Compassion fatigue itself is an expanded version of burnout. As stated below, compassion fatigue is really burnout plus the fact that the nurses have to deal with very sick and dying patients much of the time, as with oncology nurses, who exhibit high levels of compassion fatigue. According to Bush (2009), the signs of compassion fatigue are that the nurse identifies with and integrates the grief, emotions and fears of their patients, and this means that their own stress and emotional pain are exacerbated. The nurses experience a kind of vicarious trauma in these situations, as they absorb the emotions of their patients, and this affects the nurse's perceptions of trust, safety, self-esteem, control, and intimacy (Bush, 2009).

Nature of the Problems and their Causes
The nature of the problem of burnout is that it results in severe mental fatigue and is an energy drain, according to Espeland (2006). Espeland (2006) states that burnout also results in depersonalization and a reduced feeling of accomplishment. Espeland (2006) further states that there are five work situations which might contribute to job burnout. One is that there is ambiguity on the job, as there is a lack of goals and information. No-win situations represent another type of employment issue which contributes to burnout; this means that the manager is always dissatisfied, no matter how well the nurses perform. Role overload is the third situation, and this means that the nurses have too many responsibilities. Role conflict is the fourth situation, which means that there are conflicting responsibilities and the nurses feel pulled in different directions. The fifth situation is when the nurses are underpaid, despite the fact that they work hard. Compassion fatigue is slightly different from burnout, but is described by Bush (2009) as being an expanded form of burnout. In this case, it is distinguished from burnout, according to Bush (2009), by the fact that, in addition to there being stressors in the workplace, like between

Wednesday, November 20, 2019

Healing Hospitals Essay Example | Topics and Well Written Essays - 1000 words

Healing Hospitals - Essay Example Hence this suggests that those medical institutions which follow the concept of the healing hospital will be anchored on the principle of the Golden Thread, which is a symbol of faith, conviction and reliance upon the Almighty (Chapman & Chapman, 2004). It is a common expectation that every medical center which aims to promote the model of a healing hospital must provide a calm, peaceful and quiet internal environment to the patients. This would let them feel tranquility and harmony and assist in their fast recovery. A harmonious environment plays a great role for the sufferer, as most of their recovery takes place when the body is asleep or resting peacefully. Thus it is not only proper medicines, staff, the latest technologies and equipment that make a good hospital; it is the factors of concern, peace and tranquility that make the hospital a healing one. Components of a Healing Hospital There are three basic key components of a healing hospital, which are as follows: The assimilation of work layout and advancement It is one of the most important components of the healing hospital and plays a vital role in the scenario. ... By the adoption of such an obliging policy patients would feel loved, cared for and attended to by the doctors and other hospital staff members, which would lend them a hand towards their fast recovery. A remedial substantial environment This component considers the fact that not only are the doctors responsible for the care and treatment of the patients, but it is also essential for the other staff members of the hospital to connect with their families and relatives as caregivers. It has been widely shown that if hospitals manage to establish an environment of compassion, love, care, concern, adoration, empathy and the like, then patients show a faster recovery from their pain, stress and illness. It would also lead the family members of the patients to support the patient in a healthy and lovable environment where they are assured that their loved ones will be treated as at home (Gaut et al, 1994). A background of fundamental affectionate care This is basically the establishment of a radical loving care environment, a philosophy initially proposed by Erie Chapman a long time ago. This accounts for the fact that such a culture must be enforced in the healing hospital in which each staff member must know why they have become part of this sacred industry and what their ultimate objectives in serving humanity are. It endorses the recovery of patients by means of a holistic approach that not only caters to the patients' needs but also takes care of their spiritual and emotional needs as well. The challenges of creating a healing environment in light of the barriers and complexities of the hospital environment Although the establishment of a healing hospital seems an easy task, it is not a piece of cake. It requires a long tenure and excessive

Monday, November 18, 2019

Private Finance Initiative Essay Example | Topics and Well Written Essays - 1750 words

Private Finance Initiative - Essay Example TUPE enables these individuals to enjoy the status of being public and private workers simultaneously. This arrangement is intended to relieve the government of the heavy burden of initiating and funding projects across the country. The PFI is a program that began in Britain and Australia before spreading to most of the Western countries and eventually to the rest of the world. Since the early 1990s, PFI has grown into one of the common ways to develop public investments1. The program is being used to develop many different types of public infrastructure. With a PFI, private organizations can place bids on these infrastructure projects, reversing the conventional arrangement whereby developing public projects was solely the responsibility of the government. The private investor that emerges the winner in the bidding process is normally awarded the contract to develop and maintain the infrastructure project. PFI enables private companies to benefit from a long-term profit from such an initiative2. In most cases, governmental organizations are not ready to handle big projects, but they do want to make sure the projects are completed. By engaging the private sector through a PFI, this is tenable. Apart from relieving the government of the burden of laying infrastructure, a private finance initiative reduces the amount of tax being channelled to such projects. When the private investors shoulder the larger percentage of the funding, the government can then concentrate on other important projects. PFI projects In many cases, the method of construction that is implemented by governments has been based on placing the burden on the PFI contractors to design, bid and build the public assets. Under these criteria, the public organizations often come up with a design for a public infrastructure project. This work may be done by internal experts, or it may be awarded to a private company specialized in architecture. Once the plan is authorized, the government then invites bids from privately owned construction companies, thus paving the way for the winning bidder to construct the facility3. Many projects for government facilities have conventionally had extended private sector contracts to cater for maintenance. Typical cases of a PFI are court facilities and government offices that have been housed in privately owned buildings. The health care industry is also not left out: many small government-owned health care facilities operate in private sector premises. Better Service Delivery The private finance initiative has been implemented in the United Kingdom, where the government emphasized its significance and contribution toward better service delivery to citizens. In 2002, the government announced that it would engage the private sector more, especially to improve the quality of services in the healthcare industry4. The government made public its intention to ensure that quality services were achieved by approving contracts that had met the thresholds of quality. But whereas PFI can be more costly to implement than conventional government funding, since public institutions enjoy lower lending rates than private investors, most governments around the world have held the belief that the increased cost of amassing the needed finances through the private sector will be repaid in better services delivered over a far longer period of time5. Additionally, proponents of a PFI believe that there will be efficiency savings.
Market forces have also proved the government wrong: private companies have

Saturday, November 16, 2019

Flow Phenomena Within a Compressor Cascade

Flow Phenomena Within a Compressor Cascade
Paolo Mastellone

\section{Aim of the investigation}
The scope of the assignment is to study and assess the flow phenomena within a compressor cascade employing controlled-diffusion blades through a computational fluid dynamics simulation. The results of the simulation are subsequently compared with the experimental data obtained from the simulated cascade. The quality of the results and the discrepancies are discussed in order to demonstrate understanding of the theory and of the computational tools applied.

\section{Experimental data}
The simulation is based on the experimental work of Hobson et al.\cite{rif1}, who studied the effect of the Reynolds number on the performance of second-generation controlled-diffusion stator blades in cascade. The three Reynolds numbers evaluated were 6.4E5, 3.8E5 and 2.1E5. This work was carried out in order to analyse a Reynolds number more representative of flight conditions and to create a test case for computational fluid dynamics models of turbulence and transition. The experimental cascade is made of ten 67B stator blades with an aspect ratio of 1.996 and a solidity of 0.835. The technique used for the experimental measurements is laser Doppler velocimetry (LDV) with a seed material of 1 $\mu$m oil-mist particles. The experimental data and the cascade geometric parameters are shown in the figures below. The Reynolds number used for the simulation is 6.4E5, which gives an inlet velocity of
$$ V = \frac{Re\,\nu}{L} $$
where $\nu$ is the kinematic viscosity and $L$ is the blade chord.

\section{Mesh}
The software used for the mesh generation is ANSYS ICEM. The mesh is of critical importance for the simulation and its results: a well-constructed mesh avoids instabilities and convergence problems and increases the likelihood of reaching the correct solution \cite{rif4}. There are key aspects to take into account: the mesh must capture both the geometric details and the physics of the problem.
The discretization is made for one representative flow passage by introducing periodic boundary conditions. The fluid domain thickness is half of the blade spacing in order to use the periodic boundary conditions properly: the fluid quantities at the top and the bottom of the domain will be the same, representing the periodicity of the cascade. The inlet and the outlet distances from the blade are respectively 2.5 and 3 times the blade chord, so that their position does not influence the results and the flow is fully developed at these stations. In order to obtain low numerical diffusion the mesh must be aligned with the flow direction \cite{rif2}; consequently, to reproduce the cascade geometry in the simulation, the blade is staggered by $\ang{16.3}$ and the inlet grid inclination is $\ang{38}$ while the outlet one is $\ang{5.5}$. The mesh is structured and made of quadrilateral elements, because they can be fitted to the flow direction and are quite tolerant of skew and stretching \cite{rif2}. To adapt the mesh to the profile an O-grid made of 9 blocks is used.

\subsection{First node position}
One major parameter for the mesh sizing is the non-dimensional wall distance $y^+=\frac{u_\tau y}{\nu}$, where $u_\tau$ is the friction velocity. This parameter must be chosen as a function of the type of boundary-layer treatment. The use of a wall function allows bridging the explicit resolution of the near-wall region, which is described by the dimensionless parameters $u^+$ and $y^+$. The turbulent boundary layer is subdivided into the viscous sub-layer (roughly $y^+<5$), the buffer layer and the log-law region; the target $y^+$ of the first node therefore depends on whether the near-wall region is resolved or bridged with a wall function, which differs between the two turbulence models used here, the k-$\omega$ SST and the k-$\varepsilon$ RNG.
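As a rough cross-check of the figures quoted in this section and the next, the inlet velocity implied by the Reynolds number and the first-cell height implied by a target $y^+$ can be estimated with a short script. This is only a sketch: the blade chord of 0.127 m is inferred from the velocity-profile discussion later in the report (0.1 chord = 12.7 mm), the air properties are assumed standard sea-level values, and the flat-plate skin-friction correlation is an assumed approximation for sizing purposes, not the procedure actually used by the author.

```python
import math

# Assumed air properties (standard conditions) and chord inferred from the report
nu = 1.46e-5       # kinematic viscosity [m^2/s]
rho = 1.225        # density [kg/m^3]
chord = 0.127      # blade chord [m] (0.1 c = 12.7 mm in the velocity-profile section)
Re = 6.4e5         # chord Reynolds number used for the simulation

# Inlet velocity from Re = V * c / nu
V = Re * nu / chord
print(f"Inlet velocity: {V:.2f} m/s")           # ~73.6 m/s, close to the 73.56 m/s quoted

def first_cell_height(y_plus, Re, V, nu, rho):
    """Estimate the wall distance giving a target y+ (assumed flat-plate correlation)."""
    cf = 0.058 * Re ** -0.2                     # assumed turbulent flat-plate skin friction
    tau_w = 0.5 * rho * V ** 2 * cf             # wall shear stress [Pa]
    u_tau = math.sqrt(tau_w / rho)              # friction velocity [m/s]
    return y_plus * nu / u_tau

for yp in (1, 25):
    h_mm = first_cell_height(yp, Re, V, nu, rho) * 1e3
    print(f"y+ = {yp:>2}: first node at ~{h_mm:.3f} mm")
# y+ = 1  -> ~0.004 mm, y+ = 25 -> ~0.11 mm, consistent with the values adopted below
```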
For the k-$\omega$ SST a near-wall treatment has been chosen, hence $y^+=1$, which resulted in a first node distance of 0.004 mm. With the k-$\varepsilon$ RNG model a standard wall function has been adopted and, choosing $y^+=25$, the first node distance is 0.1 mm.

\subsection{Grid independence study}
The number of nodes required for a 2D simulation with resolved boundary layers is around 20000, while around 10000 nodes are sufficient if a wall function is used \cite{rif2}. The grid adopted for the k-$\omega$ SST has 20128 nodes. The mesh for the k-$\varepsilon$ RNG model, which uses a wall function, has 14488 nodes. The two meshes have been chosen among three with increasing resolution: a coarse, an intermediate and a finer one. The Cd and Cl values obtained from the three meshes are displayed in the table below for the two turbulence models used for the simulation: k-$\omega$ SST and k-$\varepsilon$ RNG. A grid independence study and a mesh quality analysis have been carried out for the meshes of both models, and satisfactory results were achieved. In this assignment only the mesh analysis of the k-$\omega$ SST model with $y^+=1$ is reported.
The differences between the Cl and Cd values of the intermediate and the fine mesh are negligible, hence the results no longer depend on the mesh resolution and a further increase in the number of nodes is ineffective. Consequently the intermediate mesh has been adopted in both cases, since the results are mesh-independent. The quality of the mesh can be analysed through specific tools available in the software. The overall quality level is acceptable, above 0.85 out of 1, even if there are some parts that can be improved. Indeed, the skewness at the top, due to the curved flow profile, and near the trailing edge should be reduced. The regions not affected by the wake and the upper and lower parts have been left intentionally coarse, since there are no steep gradients in these regions (see figure 10). The rather high aspect ratio in the zones in front of and behind the blade can be tolerated because it has little influence, since the mesh is parallel to the flow there. The outcomes are displayed below.

\section{Simulation}
The software used for the simulation is ANSYS FLUENT, with double precision and four processors enabled for the calculations. The problem has to be properly set up through the following steps.

\subsection{Solution setup}
In this section the inputs for the simulation are implemented. The mesh has to be scaled to the proper geometric dimensions (mm) and afterwards checked for errors. The solver is pressure-based and the simulation is 2D planar. The turbulence models used and compared are the k-$\varepsilon$ RNG with a standard wall function and the k-$\omega$ Shear Stress Transport, both with default model constants. The methods use two separate transport equations for the turbulent velocity and length scales, which are independently determined \cite{rif5}. The first model is characterised by robustness, economy and reasonable accuracy. The RNG formulation contains some refinements which make the model more accurate and reliable for a wider class of flows than the standard k-$\varepsilon$ model \cite{rif5}. It is semi-empirical and based on the transport equations for the turbulence kinetic energy ($k$) and its dissipation rate ($\varepsilon$) \cite{rif5}.
The limit of this model is the assumption of fully turbulent flow, which is not the case here.
The second model is also empirical but is based on the specific dissipation rate ($\omega$). The k-$\omega$ SST is an improvement of the standard k-$\omega$ and is more reliable and accurate for adverse-pressure-gradient flows because it includes the transport effects for the eddy viscosity \cite{rif5}. This model should therefore capture the flow behaviour more accurately, given the adverse pressure gradient on the suction side of the blade. The fluid used is air; the specific heat and the thermal conductivity are kept constant, as are the density and the viscosity. Indeed, the velocity corresponding to this Reynolds number is low (low Mach number), so the flow can be considered incompressible and the energy equation is not necessary.
The boundary conditions for the blade profile, the outlet and the lateral edges have been set to wall, pressure outlet and periodic respectively.
For the inlet boundary condition the velocity inlet has been selected with the magnitude-and-direction method; the velocity magnitude from the Reynolds number is 73.56 m/s and the direction components are $x=\cos(38\degree)=0.78801$ and $y=\sin(38\degree)=0.61566$. For the turbulence definition the intensity and length scale method is used, since there is no information about the values of $k$, $\omega$ and $\varepsilon$ but only about the inlet turbulence. The value of the turbulence intensity is determined by the formula
$$ I = 0.16\,(Re)^{-1/8} $$
The turbulent length scale, from the Fluent manual, is
$$ \ell = 0.07\,L $$
which is an approximate relationship based on the fact that in fully developed duct flows $\ell$ is restricted by the size of the duct, since the turbulent eddies cannot be larger than the duct \cite{rif5}.

\subsection{Calculation parameters}
In this step the parameters needed to reach the solution are set. The calculation has been split into two parts: in the first the solution method uses the SIMPLE scheme with first-order upwind spatial discretization; the second uses a coupled scheme with second-order upwind discretization. The first part provides a first-order accurate result which is used as the input for the second part of the calculation.
Monitors are enabled to assess the convergence of the calculation. For the residuals the convergence criterion has been set to 1E-6 for continuity, x-velocity, y-velocity, energy, k and $\omega$. Two other monitors, for Cl and Cd, have been added to appraise the convergence. For Cd the vector components are x = 0.78801 and y = 0.61566, while for Cl they are x = -0.61566 and y = 0.78801. Their values must become asymptotic when the solution converges.
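The direction vectors quoted above for the inlet velocity and for the Cl/Cd monitors follow directly from the 38° inflow angle: the drag monitor is aligned with the inflow and the lift monitor is perpendicular to it. The short check below is not part of the original set-up; it only verifies the quoted components.

```python
import math

beta = math.radians(38.0)                        # inlet flow angle

# Inlet velocity direction (also the drag monitor direction)
drag_dir = (math.cos(beta), math.sin(beta))      # (0.78801, 0.61566)

# Lift monitor direction: 90 degrees counter-clockwise from the inflow
lift_dir = (-math.sin(beta), math.cos(beta))     # (-0.61566, 0.78801)

print("drag direction:", tuple(round(c, 5) for c in drag_dir))
print("lift direction:", tuple(round(c, 5) for c in lift_dir))
```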
The last parameter used to check convergence is the net mass flow flux through the domain, which must be zero. To initialize the solution a hybrid method is used; afterwards the calculation can be run.

\section{Results}
\subsection{Convergence}
Convergence has been reached after 479 iterations for the k-$\omega$ SST and after 410 for the k-$\varepsilon$ RNG. From the reports the mass flow flux can be evaluated: the difference between the inlet and the outlet is of the order of 1E-7 in both cases. According to these outcomes convergence has been verified, and the validation of the simulation results against the experimental study can be performed.

\subsection{Post processing}
The post-processing of the results is useful to assess the validity of the simulation.
The velocity contours capture the acceleration of the fluid on the suction side and the deceleration on the pressure side. The pressure contours show the depression on the suction side and the overpressure on the pressure side. The stagnation point on the leading edge is highlighted by the pressure and velocity contours: the velocity is zero and the pressure reaches the stagnation value. The separation of the fluid can be seen from the reversed-velocity region on the rear part of the airfoil. The two methods made different predictions for the separation phenomenon: the velocity and turbulence contours, as well as the velocity pathlines, show a less intense separation region and a smaller recirculation zone for the k-$\varepsilon$ RNG model.

\subsubsection{K-$\omega$ SST}
\subsubsection{Cp distribution}
The Cp distribution is compared with the experimental one. The values from the paper have been extracted and plotted in a Matlab graph to allow a better comparison. The Cp coefficient is defined by
$$ C_p = \frac{p-p_{\infty}}{\frac{1}{2}\rho_{\infty} V_{\infty}^2} $$
where the values of $\rho_{\infty}$ and $p_{\infty}$ are extracted from the Fluent reports as mass-weighted averages. The abscissa values from the Fluent data have been normalised with the chord length in order to obtain the same type of graph. In the experiment, for the low and intermediate Reynolds numbers there was a separation bubble between approximately 50 and 65% of the chord for Re=3.8E5 and between 45 and 70% for Re=2.1E5, while it was absent for the highest Reynolds number. The absence of the separation bubble is captured by both models, since the Cp coefficient rises continuously after the point of minimum pressure. The separation at about 80% of the chord is highlighted by the flat trend of the Cp \cite{rif6} in both models. On the pressure side the trends are very similar to the experiment. On the suction side a difference is observed after 40% of the chord: both simulation results are shifted, and a possible explanation could be the presence of 3D effects and secondary flows which are not captured by the 2D simulations. In the subsequent sections only one passage has been taken into account for the comparison with the results of Hobson et al.\cite{rif1}. Stations 7, 8, 9 and 13 have been used for the observations (see figure 4); stations 7, 8 and 9 have been taken perpendicular to the profile, as shown in the paper.

\subsubsection{Wake profile}
The wake profile presents the velocity distribution behind the blade; the measurement has been made at station 13, which is 20% of the chord downstream of the trailing edge. The data from the simulation were exported from Fluent and plotted in Matlab, with the abscissa normalised by the blade spacing S. Both models show a profile similar to the experiment, even if the wake width is underestimated. Nevertheless, the obtained trends appear to be quite accurate.

\subsubsection{Turbulence intensity}
The turbulence intensity profiles exhibit a trend similar to the paper. The values have been divided by $\sqrt{2}$ because of the different definition of turbulence intensity, and the values on the abscissa have been normalised with the blade spacing S. The simulations captured the double-peaked distribution due to the boundary-layer separation. The peaks correspond to the maximum velocity gradient in the wake profile (see figure 27), as in the experimental data. The results of the k-$\omega$ SST are closer to the trend reported in the paper. The underestimation of the wake amplitude is consistent with the previous graph.
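The normalisations described above (abscissa divided by the blade spacing S, Fluent's turbulence intensity divided by $\sqrt{2}$ to match the single-component definition used in the paper) can be applied to an exported profile with a few lines of post-processing. The sketch below is hypothetical: the file name and two-column layout stand in for whatever XY export is produced from Fluent, and the spacing is derived from the solidity and the chord assumed earlier.

```python
import math

# Blade spacing from the solidity (0.835) and the assumed chord (0.127 m)
S = 0.127 / 0.835   # ~0.152 m

def rescale_profile(path, spacing):
    """Read a two-column (y [m], Ti [%]) XY export and rescale it.

    Returns (y/spacing, Ti/sqrt(2)) pairs matching the definitions used in the paper.
    """
    points = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue                      # skip headers and blank lines
            try:
                y, ti = float(parts[0]), float(parts[1])
            except ValueError:
                continue                      # skip non-numeric lines
            points.append((y / spacing, ti / math.sqrt(2)))
    return points

# Hypothetical usage with a file exported from Fluent at station 13:
# profile = rescale_profile("station13_turbulence_intensity.xy", S)
```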
\subsubsection{Outlet flow angle}
The velocity flow angle distribution shows considerable differences compared with the paper data. A likely explanation is the limitation of the simulation, which can capture only the 2D flow characteristics, while the significant flow angle variation is primarily caused by the secondary flows in the cascade, which are typical 3D effects. This is supported by the fact that the trends predicted by the two models are very similar, hence both miss some flow characteristic that cannot be predicted by a 2D simulation. The mass-averaged exit flow angle in the experiment was $\ang{9.25}$; the results from the Fluent reports are shown below.

\subsubsection{Velocity profiles}
The velocity profiles at stations 7, 8 and 9, normalised with the inlet velocity and the blade chord, are presented.
At station 7 the curves are almost identical: the velocity evolves from zero at the wall and then increases above the reference speed of 73.56 m/s. At stations 8 and 9 both the experiment and the k-$\omega$ SST present a reverse flow close to the wall, evidence of the separation. At stations 8 and 9 the experimental reverse flow reaches 0.06 (7.6 mm) and 0.1 (12.7 mm) of the blade chord, which is in agreement with the results of the k-$\omega$ SST model. The k-$\varepsilon$ RNG fails to capture the reverse flow (only a negligible region at station 9). This is in accordance with the theory: the k-$\omega$ SST model performs better in handling non-equilibrium boundary-layer regions, like those close to separation \cite{rif4}.

\subsubsection{Loss coefficient}
According to \cite{rif3} the loss coefficient is defined by
$$ \omega = \frac{\bar{p}_{01}-\bar{p}_{02}}{\bar{p}_{01}-p_1} $$
The table below presents the values calculated for the two models; the figures have been taken from the Fluent reports as mass-weighted averages. The loss coefficient found in the experiments is 0.029.

                                              k-$\omega$ SST    k-$\varepsilon$ RNG
Total pressure inlet  $\bar{p}_{01}$ [Pa]         2290              2209
Total pressure outlet $\bar{p}_{02}$ [Pa]         2176              2103
Static pressure inlet $p_1$ [Pa]                 -1048             -1107
Loss coefficient $\omega$                         0.034             0.031

The two coefficients are of the same order of magnitude as the one determined experimentally. The slight difference could be explained by the different reference sections used for the mass-weighted averages (upper and lower transverse slots in the experiment, see figure 1), since the inlet and the outlet have a different position. Moreover, the slightly larger value obtained from the k-$\omega$ SST compared with the k-$\varepsilon$ RNG is consistent with the greater separation, and hence greater dissipation of energy, predicted by that model.
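The loss coefficients in the table can be reproduced directly from the tabulated mass-weighted pressures using the definition above. The short check below is only a verification sketch, not part of the original workflow.

```python
def loss_coefficient(p01, p02, p1):
    """Total-pressure loss coefficient: (p01_bar - p02_bar) / (p01_bar - p1)."""
    return (p01 - p02) / (p01 - p1)

# Mass-weighted averages from the table above [Pa]
cases = {
    "k-omega SST":   (2290.0, 2176.0, -1048.0),
    "k-epsilon RNG": (2209.0, 2103.0, -1107.0),
}

for name, (p01, p02, p1) in cases.items():
    print(f"{name}: omega = {loss_coefficient(p01, p02, p1):.3f}")
# Prints 0.034 and 0.032, in line with the 0.034 and 0.031 reported
# (the small difference comes from rounding of the tabulated pressures).
```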
\section{Conclusions}
In this assignment a CFD simulation using the ICEM and Fluent software has been carried out and the results have been analysed with engineering judgement, in order to demonstrate understanding of the theory and the tools.
The achievement of satisfactory results is strictly related to the successful implementation of every single step of the simulation. Knowledge of the aerodynamics and the physics of the problem is paramount to set up the mesh, the boundary conditions and the calculation.
Great attention has been paid to the mesh generation, which turned out to be the most challenging part, since a lot of experience is needed to obtain good results. The key aspects taken into account are the grid domain extension, the grid type, the alignment with the flow, the aspect ratio and the skewness. The choice of the wall treatment influences the first node position: to compare the two turbulence models, $y^+=1$ has been used for the k-$\omega$ SST, while $y^+=25$ has been used for the k-$\varepsilon$ RNG, which uses a standard wall function. Once the mesh has adequate quality it is ready for the simulation. The choice of the turbulence model and of the boundary conditions depends on the problem studied and should represent the physics of the problem as precisely as possible. Once the simulation has been run, checking convergence is a necessary but not sufficient condition for obtaining correct outcomes: the calculation can converge to wrong results if the problem is not well posed. Some modifications have been made to the mesh in order to attain more precision and the calculation has been repeated several times; considerable experience is required to reduce the number of attempts.
A qualitative and quantitative comparison with the experimental results showed both the accuracy and the limitations of the simulation. Certainly the mesh can be improved, for example by using more than nine blocks, to improve the skewness and the aspect ratio, particularly near the leading and trailing edges. The comparison between the k-$\omega$ SST and the k-$\varepsilon$ RNG highlighted the limitations of the latter in treating unstable boundary layers. The discrepancies observed can be attributed to the 3D effects not captured by the simulation and to the limitations of the models adopted. The adoption of more sophisticated models such as the Transition SST (4 equations) and the Reynolds stress model (5 equations) could improve the accuracy.

Wednesday, November 13, 2019

Sweet Diamond Dust :: essays research papers

Chapter IV focuses on the presence of the Americans in Puerto Rico during the early part of the twentieth century and their subsequent development of the sugarcane industry there. During this time, the United States military occupied Puerto Rico. Due to this occupation, the native islanders were affected in numerous ways and were looked down upon by the Americans. The Americans viewed the natives as incompetent and unable to be trusted. Many new American banks were popping up in Guamani that were reluctant to finance island-run mills, but were giving money to the American-run mills: "A number of powerful banks from the north had recently opened branches in Guamani... These banks, however, found no difficulty in financing the new sugar corporations that had recently arrived in town, but mistrusted island initiative" (26). The opening and inauguration of the Snow White Mills, "...the ultramodern refining complex the newcomers (Americans) had been building for months on the valley," (28) was of major significance in this chapter. Don Julio was strong-willed and vowed that he would not sell any of his land and "share the same fate" as the other local sugar mills. It was rumored that the Americans had declared a cessation of hostilities in the sugar mills war, and were now willing to aid the criollo hacienda workers. This was his opportunity to mingle and discuss his plans with the owners of Snow White Mills. When Don Julio arrived at the fair grounds, he made his way over to Mr. Durham and Mr. Irving, the president of the mills and the president of the sponsoring bank, National City Bank, respectively. These two Americans saw the US victory as a major step towards modernizing for the US and for Puerto Rico: "'Twenty years ago it brought you freedom and order; this time it's bringing you our nation's progress. Thanks to that army out there your island is being inaugurated today into the modern age," (32) said Mr. Durham, speaking of the army that was present at the festivities. Don Julio was disturbed and offended by this comment. Mr. Irving said that the progress of the new century belongs to the Americans and the progress of the past belongs to the Spanish, yet again showing how the Americans looked down upon the native peoples. He then proposed his deal to the two Americans; he would sell them some of his cane fields, if they would lend him the money to 'modernize' his own mill.

Monday, November 11, 2019

Cry Freedom Essay

Cry Freedom was a movie that took place in South Africa in the 1970s. It is a movie about a journalist, Donald Woods, and a black activist, Steve Biko. While Woods was around Biko reporting what was happening, Biko invited Woods to go see one of the impoverished black townships so he could see where black people in South Africa lived. When they arrived, Woods was shocked. The black people of South Africa were living in terribly poor conditions due to the government-imposed restrictions on their lives. Woods realizes how wrong the government is for putting these restrictions in place and begins to agree with Biko and his beliefs. Biko was a very outspoken activist for the rights of the black people in South Africa. The government had already banned him from leaving King William's Town, his hometown, due to his past efforts for the cause. Later on in the movie, Biko ends up getting arrested after giving a political speech outside of the area to which he had been banned. After being arrested, Biko is beaten to death. Since Woods had been reporting on the story, he and Biko had become good friends. After the death of his friend, Woods decided to work to expose the government's part in the beating of Biko. After meeting with the South African Minister of Justice, Woods is banned by the government just as Biko was when the movie began. After being banned, Woods and his family are targeted and harassed by the government. Woods manages to escape the country to Lesotho disguised as a priest, and the rest of his family joins him later on. Woods then escapes to Botswana with the help of an Australian journalist. Cry Freedom really shows us the issues of South Africa from the past. Black people in South Africa were severely discriminated against and were forced to live in terrible conditions. These terrible conditions were forced upon the black community by the government. This was the time of the apartheid system, so the government was the cause of much of the discrimination against the black people of South Africa. The movie really shows us the true face of that government. We see how the government was behind the terrible things that happened to black people during that time. Not only did the government support this discrimination, but it also went as far as killing black people who were trying to speak out for their rights, just as it did to Biko. Cry Freedom shows us how horrible the government actually was in South Africa during apartheid.

Saturday, November 9, 2019

An Introduction to the Beatles essays

An Introduction to the Beatles essays My experience with the Beatles has likely been very different from that of most people, especially the avid Beatles enthusiasts I have met this semester. John, Paul, George, and Ringo first arrived in the United States on February 7, 1964. From the moment they landed at JFK Airport, they began feeling the love from fans eagerly awaiting their arrival. The stage was set for Beatlemania to take hold in the U.S., and it sure did. The Beatles were embraced by the entire country, and the rest is history. Just five years prior to the band's arrival in New York City, however, communist dictator Fidel Castro overthrew Fulgencio Batista's administration and took power in Cuba. He established the first communist regime in the Western hemisphere, and under his rule thousands of Cubans were removed from their homes and held as political prisoners for speaking out against his oppressive system. My grandparents were among these oppressed citizens, and right around the time that the Beatles arrived in New York, my grandparents left Cuba and fled to the United States. They knew nothing of the Beatles, and they barely spoke English as it was. By the time they were able to establish themselves in the United States and have kids, it was the 70s. As a result, my parents were more influenced by artists of the 80s like Madonna and Prince while they were growing up. Of course they had heard of the Beatles, but the British band was for the most part before their time. In turn, the Beatles also had very little influence in my life, and I didn't even hear about them until I was in high school. By the time I started my freshman year at the University of Florida, I was well aware of the fact that the Beatles were one of the greatest and most influential bands in history. I just didn't understand why. I had listened to a few of their songs here and there, but I couldn't see what made them so great and so famo...

Wednesday, November 6, 2019

Africa- a Look from A White mans Binoculars essays

Africa- a Look from A White mans Binoculars essays It is the God-given duty of the white men to civilize and christianize the primitive, underprivileged and uncivilized population of the rest of the world; this is a very common phrase used in history books to explain European intervention in other continents and island nations. The African continent was also a victim of British conquest, but the British called it their protectorate. Did they really perform their duty or were they there to exploit the resources? There have been a lot of explanations about how Africa is perceived by the rest of the world, let alone the western world. To account for the whole world would be unrealistic and unrelated to this assignment, so I will just focus on the western world. Africa has always been associated with words such as primitive, barbaric, savage and uncivilized. The negative portrayal has largely been a result of how western media covers African news. Africa has always been referred to as a Dark Continent. The history of Africa and its people is depicted in the western world as nothing but a self-proclaimed tribal-owned land, which had a lot of wealth that the primitive people had no idea how to utilize. Similar had been the history of other countries prior to European intervention, for example American Samoa, the Fiji Islands, New Zealand and Australia; but the image of these countries is quite favorable. According to ABC's Ted Koppel, half a million Ethiopians dying doesn't provoke the same response as would the deaths of half a million Italians (Hultman). Maybe Americans and Europeans are more concerned with countries where their economic interests lie. The colonists went to Africa, exploited their resources, took slaves and came back to their countries; but for the Africans the history of their country is not so simple. Why is there a separation between blacks and...

Monday, November 4, 2019

Article the star response Essay Example | Topics and Well Written Essays - 500 words

Article the star response - Essay Example Very much like these church furnishings, he was as inconspicuous as the window fixtures, his skin as pallid as the walls, and his face, topped with a few wisps of hair, was left as blank as the faces of the stone-cold saints by the deteriorating disease that appeared to have drained the life out of him even before his time was up. One Sunday I saw him and I said to myself, "This guy's definitely a saint's buddy, I bet his prayers go straight up to heaven." That Sunday, he was strangely paler than his usual pallor and he was not walking; he was painstakingly dragging himself towards his favorite saint. I never saw him again after that Sunday. On yet another Sunday, curious about what had happened to the man, I asked one of the church regulars about the guy's whereabouts. I learned he had died the night of that last Sunday I saw him. I never found out what disease he had, but from the looks of it he may have had a cancer of some sort. Whatever condition he may have had, what happened to the man struck questions and doubts in my mind. Why was he denied the miracle he had prayed for almost every day? Was the saint, his buddy, too busy to hear out his sole supplication? Was it too much to ask for him to be eased of that agonizing pain that caused him to drag his feet just to go to church? Yes. What happened to that man dealt an immense blow to my faith, not in God but in the saints I take little notice of at church; and I reiterate, my faith in God did not falter, but doubts about these marble statues at church had launched a massive attack on my belief in what the church had introduced as 'saints'. I stand by the basic principle that God can never be cruel and would never give false hope to Men. These thoughts clouded my mind and covered my ears, which caused me to not hear what was said during the service. The service ended and I remained sitting still, oblivious to the faint bustle of the leaving churchgoers. As I came to my senses, I

Saturday, November 2, 2019

Business Process Management in Systems Integration Literature review

Business Process Management in Systems Integration - Literature review Example Actually, business process management is a division of infrastructure management, which is a managerial field aimed at maintaining and enhancing a company's equipment and fundamental tasks (TechTarget, 2005), (Orbis Software Ltd., 2011) and (KnowledgeHills, 2011). This review discusses some of the important aspects of business process management and its role in systems integration. Business Systems Integration Various studies conducted by industry analysts have revealed that companies invest 20% to 30% of their IT budget in integrating systems and applications, even though they should be able to assume that the systems they invest in are interoperable. However, such circumstances are hard to defend, and customers are trying to clarify this point to their business and systems integration partners. On the other hand, with the passage of time, the process of systems integration is becoming more and more difficult. Additionally, clients' integration projects now extend across the value chain, which results in n-factorial points of integration between applications owned by various corporations. Moreover, the majority of businesses at present do not depend completely on a single application image, for instance ERP. In this scenario, integration projects span legacy systems, already available ERP and new purchases (Smith, 2002). Types of BPM At present, there exist three different kinds of BPM frameworks in the marketplace. The first is horizontal frameworks, which focus on the design and development of business procedures and normally pay more attention to technology and reuse. The second type is vertical BPM frameworks, which focus on a precise group of synchronized tasks and include pre-built templates that can be configured and deployed according to business needs and requirements. Lastly, full-service BPM suites encompass five fundamental components (TechTarget, 2005), (Businessballs, 2009) and (Owen & Raj, 2011): process discovery and project scoping; process modeling and design; a business rules engine; a workflow engine; and simulation and testing. Furthermore, more and more businesses are now adopting on-premise business process management (BPM) as a standard, while advances in cloud computing have led to increased interest in on-demand, software as a service (SaaS) based capabilities and services (TechTarget, 2005). Business process management in systems integration Enterprise System Integration (ESI) is the standard long-lasting development, merger and incorporation of many advanced computing science, enterprise-wide management and networking fields, comprising enterprise application integration, electronic business process management, self-defining meta-data repositories (information sharing and disambiguation), enterprise architecture, etc. In addition, enterprise system integration identifies and helps get rid of the following main factors (Hartweg, 2007): unnecessary redundancy; duplication of effort; and reinventing the wheel. Moreover, it also helps reduce the mismatched and unrelated enterprise elements found throughout isolated, compartmentalized departments that lack coordination of procedures or systems, owing to independent development and maintenance budgets (Hartweg, 2007). Use of Process Automation Sometimes processes are considered as a new development model that can eventually move the object