英文翻译mobilerobot

2023-07-02

第一篇:英文翻译mobilerobot

英文翻译

他的研究结果并不明朗,往往显示留存收益类FDI对税收的反应微弱,甚至为负。这项研究显然对哈特曼模型提出了质疑,但此后一直没有人尝试用更好的数据或方法对其进行重新估计。

文献中确实吸收了Slemrod(1990)的想法,即处理双重征税的政策可能会影响税收反应。常见的区分是属地征税的国家与全球征税的国家:前者免除本国公司在国外所得的纳税义务,后者则对母公司的全球所得征税,但会通过多种方式处理国外收入,以避免对跨国公司双重征税。处理双重征税有两种标准办法:母国对跨国公司已缴纳的外国税款给予抵免,或允许将其作为费用扣除。

当研究人员着手研究1986年美国税制改革对流入美国的外国直接投资的显著影响时,这些税务处理办法对FDI与税收分析的潜在影响,开始在研究文献中占据核心地位。斯科尔斯和欧胜(1990)推测,美国税率提高后,来自全球征税国家的跨国公司在美国的FDI反而可能增加!这一看似有悖常理的结论源于对抵免制度的认识:全球征税体制下的跨国公司不会感受到应纳税额的增加;另一方面,美国国内投资者(以及属地税制下的跨国公司)则要承受美国税负增加的全部冲击。当这些公司竞购美国境内相同的资产时,全球征税的跨国公司便处于有利地位,从而投资更多。斯科尔斯和欧胜(1990)只用了很简单的统计检验,表明1986年后流入美国的FDI上升,并未控制其他因素;斯文森(1994)则对这一假说做了更仔细的检查,考察1986年改革对税率变化幅度不同的各产业FDI的不同影响。具体来说,斯文森利用1986年税改后各行业税率的变化,检验了1979年到1991年的行业面板数据,发现FDI确实随平均税率的上升而增加,尤其是来自全球征税国家的FDI。斯文森研究中一个令人担忧的问题是:他指出,用平均税率数据检验时结果支持斯科尔斯—欧胜假说,而改用实际税率时则拒绝该假说。奥尔巴赫和哈塞特(1993)提供了不利于斯科尔斯—欧胜假说的进一步证据:他们建立FDI模型,预测1986年税改应当分别鼓励属地纳税与全球纳税的跨国公司在美国进行哪类投资。特别是,他们的模型显示,属地纳税的跨国公司应在并购(M&A)类FDI上受到更强的激励,而全球纳税的跨国公司则应在新建厂房设备类FDI上受到抑制。这些数据似乎表明,1986年美国税法改革后FDI的大幅增加,源于全球征税国家(主要是日本和英国)的跨国公司推进的对外直接投资。

因此,在很多方面,1986年税制改革对FDI的影响至今仍是一个悬而未决的问题。不过,尽管这个具体问题如今已有些过时,但“全球征税制度国家(向母公司提供税收抵免)的FDI应当对税率相对不敏感”这一命题仍是持续的热点。海因斯(1996)对此做出了迄今最好的检验:他把属地征税与全球征税的对比处理办法,创造性地引入了考察州级税收是否影响美国境内FDI区域分布的文献。以往研究州税对FDI州际分布影响的文献,结果并不明朗(例如参见考夫林、Terza和Arromdee,1991)。与联邦税一样,跨国公司对州级税收的反应,可能因其在母国面临属地征税还是全球征税而不同。海因斯(1996)的实证策略是考察FDI在美国各州的分布,比较“非抵免体系”外国投资者与“抵免体系”外国投资者FDI的税收敏感度。他发现,税率每高1%,非抵免体系投资者FDI的减幅比抵免体系投资者多约9%。
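The 9% gap reported by Hines (1996) can be made concrete with a small numerical sketch. This is purely illustrative (not Hines's estimation): the baseline FDI level and the credit-system response are invented; only the 9-point difference in semi-elasticities comes from the text above.

```python
# Illustrative sketch: FDI response to a state tax increase under the two
# home-country regimes discussed above. The -9 gap is the paper's headline
# figure; the baseline and credit-system elasticity are assumptions.

def fdi_after_tax_rise(base_fdi, semi_elasticity, tax_rise_pct):
    """Percentage-change approximation: FDI changes by semi_elasticity
    percent per percentage point of tax increase."""
    return base_fdi * (1 + semi_elasticity * tax_rise_pct / 100)

CREDIT_ELASTICITY = -1.0                          # assumed for illustration
NON_CREDIT_ELASTICITY = CREDIT_ELASTICITY - 9.0   # 9 points larger decline

credit = fdi_after_tax_rise(100.0, CREDIT_ELASTICITY, 1.0)
non_credit = fdi_after_tax_rise(100.0, NON_CREDIT_ELASTICITY, 1.0)
print(credit, non_credit)  # 99.0 90.0
```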

综上所述,文献清楚地表明,在考虑税收对FDI的影响时,细节至关重要。跨国公司在母国和东道国面临种类繁多、层级不同的税率以及处理双重征税的政策,这些都会极大地改变税收对跨国公司投资激励的影响。如上文所述,各项研究的实证方法和数据样本差别很大,因此税收(以及美国1986年税改之类的改革)在多大程度上影响FDI仍是一个重要的未决问题。相对而言,证据更有力地表明:对于母国设有外国税收抵免制度的跨国公司来说,其在东道国面对的税收相对无关紧要。

文献中还有其他弱点需要解决。首先,上述研究(充其量)使用产业层面的数据来检验本质上属于公司层面行为的模型。这使得依靠理论来解释经验证据时会产生问题。最明显的例子是把平均税率当作税率变量使用,这在变量设定上是一个彻底的错误。以平均税率还是实际税率来度量纳税义务,很少被讨论,但正如斯文森(1994)的研究所例示的,两者对FDI的估计影响可能截然不同。

文献也是最近才开始研究企业所得税以外的其他税种。例如,德赛、弗利和海因斯(2004)最近的工作论文提出的证据表明,间接营业税对FDI的影响与企业所得税对FDI的影响程度相当。与此类似,双边国际税收协定对FDI的影响直到最近仍是未被开发的实证课题。这类协定有成千上万个,其中约定了降低预提所得税等事项。Hallward-Driemeier(2003)以及Blonigen和Davies(2004)发现,没有证据表明这些协定以任何显著的方式影响FDI活动。

3.3 机构

机构的质量有可能是影响FDI活动的一个重要决定因素,特别是对较不发达国家而言,原因有多个。首先,对资产的法律保护不力会增加企业资产被征收的风险,使投资的可能性降低。其次,机构质量差会妨碍市场正常运转(并且往往伴随腐败),从而抬高经商成本,也就减少了FDI活动。最后,机构质量差往往导致基础设施(即公共产品)落后,使FDI即便进入市场,预期盈利能力也会下降。

尽管这些基本假设没有争议,但估计机构对FDI影响的幅度很困难,因为并不存在对机构的精确度量。大多数度量是对一国政治、法律和经济制度的复合指数,根据官员或熟悉该国制度的商人的调查结果编制。由于受访者因国而异,国家之间的可比性值得商榷。另外,机构通常长期保持稳定,因此在一国之内随时间很少发生有意义的变化。

由于这些原因,跨国FDI研究通常包含机构质量和(或)腐败的度量,但并不经常把它作为分析重点。魏(2000a;2000b)的论文是一个例外,表明各种腐败指数与FDI有强烈的负相关,但其他研究没有发现这样的证据(如惠勒和莫迪,1992)。海因斯(1995)提供了一种有趣的“自然实验”方法:他研究了1977年的美国《反海外腐败法》,该法对贿赂外国官员的美国跨国公司施加处罚。通过比较该法案实施前后的情况,他发现了它对美国对外FDI的负面效应。这种自然实验分析为将来提出更有说服力的证据带来了希望,尽管找到这样的自然实验非常困难。

3.4 贸易保护

外国直接投资与贸易保护之间的假设性联系,在大多数贸易经济学家看来是相当清楚的:高贸易保护应当使企业更有可能以子公司生产替代出口,从而规避贸易保护带来的成本。这通常被称为“关税跳跃”(tariff-jumping)FDI。也许正因为这一理论相当简单直白,专门检验这一假说的研究并不多。另一个可能的原因在于数据:各行业间口径一致的非关税保护很难量化。许多企业层面的研究采用产业层面的指标来控制各种贸易保护方案,但结果往往好坏参半,其中包括Grubert和Mutti(1991)、科格特和张(1996)以及Blonigen(1997)。一种替代产业层面指标的做法,是利用反倾销措施,它对特定企业施加相当可观的反倾销税。利用企业所面临的这种更精确的保护度量,Belderbos(1997)和Blonigen(2002)都发现了关税跳跃FDI的更有力证据,尽管Blonigen的分析强烈暗示,只有总部设在发达国家的跨国公司才表现出这种反应。这也许是关税跳跃与其他度量的结果相互混杂的另一个原因:FDI需要大量费用,许多小出口企业可能无力融资,或者觉得无利可图。事实上,贸易保护可以明确地针对FDI较少的进口来源地。这表明FDI与贸易保护可能是内生的,而这一问题几乎没有得到实证检验。一个例外是Blonigen和Figlio(1998),他们发现的证据表明,进入美国参议员所在州或众议员所在选区的FDI增加,会提高其投票支持进一步贸易保护的可能性。

3.5 贸易效应

此前讨论的局部均衡研究在很大程度上忽略了FDI的贸易效应,而贸易效应与FDI背后的驱动因素密切相关。被引用最多的FDI动机,也许是替代对东道国的出口。正如巴克利和卡森(1981)的模型所刻画的,可以认为出口的固定成本较低,但运输和贸易壁垒使可变成本较高;转为以子公司服务同一市场的FDI能大幅降低这些可变成本,却可能涉及高于出口的固定成本。这就暗示了一个自然的演进过程:一旦国外市场对跨国公司产品的需求达到足够大的规模,企业就会从出口转向FDI。
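The fixed-cost/variable-cost trade-off described above can be sketched numerically. All cost figures below are hypothetical; the point is only the switch from exporting to FDI once demand passes the break-even quantity.

```python
# Buckley & Casson (1981) style trade-off: exporting has low fixed but high
# per-unit cost (transport, tariffs); FDI has high fixed but low per-unit
# cost. Numbers are invented for illustration.

def total_cost(fixed, variable, quantity):
    return fixed + variable * quantity

EXPORT = dict(fixed=10.0, variable=5.0)   # assumed cost structure
FDI = dict(fixed=100.0, variable=2.0)     # assumed cost structure

def cheaper_mode(quantity):
    exp = total_cost(EXPORT["fixed"], EXPORT["variable"], quantity)
    fdi = total_cost(FDI["fixed"], FDI["variable"], quantity)
    return "export" if exp <= fdi else "fdi"

# Break-even: 10 + 5q = 100 + 2q  ->  q = 30
print([cheaper_mode(q) for q in (10, 30, 50)])  # ['export', 'export', 'fdi']
```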

在早期的论文中,利普西和Weiss(1981;1984)把美国对东道国的出口对FDI进行回归,发现了正相关,这与FDI替代出口的假说相悖。然而,这些论文忽略了东道国市场需求的内生性:需求的增减会使跨国公司进行FDI和出口产品的意愿同向变动。Grubert和Mutti(1991)使用与利普西和Weiss(1981)类似的数据,在以销售为条件进行控制后得到了负的回归系数,尽管在统计上并不显著。

Blonigen(2001)认为,问题在于贸易流量中既包含会被跨国公司在当地分支机构生产所替代的最终产品,也包含用于生产这些最终产品的中间投入品。前者导致贸易与FDI之间的负相关,后者则表现为正相关。Blonigen使用协调关税制度(HTS)10位码产品层面的日本对美出口与FDI数据,结果显示:日本对美FDI的增加带动了日本生产这些产品所用中间品的出口增加,而最终产品的出口则下降了。赫德和里斯(2001)与斯文森(2004)分别使用日本企业层面数据和美国行业层面数据,提出了类似的证据。

上述讨论中一个潜在的问题是,企业之间的联系(例如向下游分销商供货的供应商)有可能影响FDI决策。日本企业往往在供应商与分销商之间存在较为正式和公开的联系,即所谓纵向联系财阀。赫德、里斯和斯文森(1995)考察了同一纵向联系财阀中的其他日本企业先期在美国某州或邻近州选址,是否会影响随后日本跨国公司的FDI区位选择。他们发现确实如此,对汽车部门尤为明显,并将其视为存在正式供应商—分销商关系的企业之间产生聚集经济的证据。

其他研究考虑了横向联系财阀对日本FDI的影响。横向联系财阀是横跨许多行业、以日本大型银行为中心的企业集团。这类集团对FDI的三种潜在影响已被提出。其中主要的一种是:横向联系财阀的银行可作为低成本资金来源,从而增加成员公司的整体投资,包括对外投资。正如霍什、卡什亚普和Scharfstein(1991)指出的,与财阀银行的成员关系可以降低监督成本,进而降低资金成本。他们对日本制造业企业的分析发现,与其他企业相比,这些横向联系财阀成员公司的投资活动受流动性约束更少。后续研究检验了横向联系财阀是否增加了日本企业的对外直接投资,但结果往往不显著或很敏感(例如,见Belderbos和Sleuwaegen,1996)。

Blonigen、埃利斯和福斯滕(2005)注意到了横向联系财阀的另一种可能影响……

……俄林模型精确检验形式的构造遇到了巨大障碍:当存在两个以上的国家和两种以上的生产要素时,该模型对贸易流量的预测可能有很大的不确定性。

第二篇:英文翻译

无锡职业技术

毕业设计说明书(英文翻译)


第三篇:英文翻译

AI EDAM:面向工程设计、分析与制造的人工智能

为了在当今竞争激烈的全球市场上提高定制化水平,许多公司正利用产品系列和基于平台的产品开发来增加品种、缩短交货周期并降低成本。一个成功产品系列的关键在于产品平台:通过在平台上添加、删除或替换一个或多个模块,或者在一个或多个维度上对平台进行缩放,从中派生出瞄准特定细分市场的产品。这一新兴的工程设计领域在过去十年中迅速成熟。本文全面综述了这一时期为支持面向大规模定制的产品系列设计和基于平台的产品开发而涌现的大量研究工作,回顾了识别产品系列内平台杠杆策略的技术,以及评估产品平台和产品系列有效性的度量指标,并特别强调辅助产品系列设计和基于平台的产品开发的优化方法与人工智能技术,还讨论了基于Web的产品平台定制系统。文中通过工业界和学术界的实例说明产品系列和产品平台的好处,最后讨论了有助于在产品系列的规划管理与设计制造之间架起桥梁的潜在研究方向。
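The add/remove/substitute/scale operations described in this abstract can be illustrated with a toy sketch. The platform and module names below are invented for illustration; real platform planning involves far richer constraints.

```python
# Toy illustration of platform-based variant derivation: a product family
# is derived from a shared platform by adding/removing/substituting modules
# or scaling a dimension. Module names are hypothetical.

PLATFORM = {"modules": {"frame", "motor_S", "controller"}, "scale": 1.0}

def derive_variant(platform, add=(), remove=(), substitute=(), scale=None):
    modules = set(platform["modules"]) - set(remove) | set(add)
    for old, new in substitute:
        modules.discard(old)
        modules.add(new)
    return {"modules": modules,
            "scale": platform["scale"] if scale is None else scale}

budget = derive_variant(PLATFORM, remove={"controller"})
premium = derive_variant(PLATFORM, add={"display"},
                         substitute=[("motor_S", "motor_L")], scale=1.5)
print(sorted(premium["modules"]))  # ['controller', 'display', 'frame', 'motor_L']
```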

当今竞争激烈的全球市场正在重新定义许多公司做生意的方式。新的竞争优势形式是大规模定制。正如Pine(1993a,第xiii页)所说,这是“一种看待商业竞争的新方式,它把识别并满足单个客户的需要和需求放在首位,同时不牺牲效率、效果和低成本”。在其关于大规模定制的开创性著作中,Pine(1993a,第6页)指出,“客户不能再被笼统地归入一个巨大的同质市场,而是一个个可以确定并满足其个性化需要和需求的个体”。他把对产品品种和客户需求日益增加的关注,归因于市场的饱和以及提高客户满意度的需要:新产品必须不同于市场上已有的产品,并且更完全地满足客户需求。桑德森和犹梅里(1997,第3页)补充说,“全球市场的兴起从根本上改变了许多公司所熟知的竞争”,由此产生的市场动态“迫使产品开发时间被压缩、产品品种不断扩展”。对汽车行业的研究和对制造企业的实证调查证实了这些趋势。沃特曼等人(1997)的著作也贯穿着类似的主题,他们考察了欧洲工业界对“客户驱动”市场的反应。

社会的发展、技术的进步、产品的更新、生活节奏的加快等等一系列的社会与物质的因素,使人们在享受物质生活的同时,更加注重产品在“方便”、“舒适”、“可靠”、“价值”、“安全”和“效率”等方面的评价,也就是在产品设计中常提到的人性化设计问题。

所谓人性化产品,就是包含人机工程的产品,只要是“人”所使用的产品,都应在人机工程上加以考虑,产品的造型与人机工程无疑是结合在一起的。我们可以将它们描述为:以心理为圆心,生理为半径,用以建立人与物(产品)之间和谐关系的方式,最大限度地挖掘人的潜能,综合平衡地使用人的机能,保护人体健康,从而提高生产率。仅从工业设计这一范畴来看,大至宇航系统、城市规划、建筑设施、自动化工厂、机械设备、交通工具,小至家具、服装、文具以及盆、杯、碗筷之类各种生产与生活所创造的“物”,在设计和制造时都必须把“人的因素”作为一个重要的条件来考虑。若将产品类别区分为专业用品和一般用品的话,专业用品在人机工程上则会有更多的考虑,它比较偏重于生理学的层面;而一般性产品则必须兼顾心理层面的问题,需要更多的符合美学及潮流的设计,也就是应以产品人性化的需求为主。

人机工程学是一门新兴的边缘科学。它起源于欧洲,形成和发展于美国。人机工程学在欧洲称为Ergonomics,这一名称最早是由波兰学者雅斯特莱鲍夫斯基提出来的,由两个希腊词根组成:“ergo”的意思是“出力、工作”,“nomics”表示“规律、法则”,因此Ergonomics的含义就是“人出力的规律”或“人工作的规律”,也就是说,这门学科研究人在生产或操作过程中合理地、适度地劳动和用力的规律问题。人机工程学在美国称为“Human Engineering”(人类工程学)或“Human Factor Engineering”(人类因素工程学)。日本称为“人间工学”,或采用欧洲的名称,音译为“Ergonomics”;俄文音译名为“Эргономика”。在我国,所用名称也各不相同,有“人类工程学”、“人体工程学”、“工效学”、“机器设备利用学”和“人机工程学”等。为便于学科发展,统一名称很有必要,现在大部分人称其为“人机工程学”,简称“人机学”。“人机工程学”的确切定义是:把人—机—环境系统作为研究的基本对象,运用生理学、心理学和其他有关学科知识,根据人和机器的条件和特点,合理分配人和机器承担的操作职能,并使之相互适应,从而为人创造出舒适和安全的工作环境,使工效达到最优的一门综合性学科。

参考文献

[1] 鲍德温,C.Y.与克拉克,K.B.(2000)。《设计规则:第1卷,模块化的力量》。马萨诸塞州剑桥:麻省理工学院出版社。

[2] 贝尔蒂,S.、杰尔马尼,M.、Mandorli, F.与奥托,H.E.(2001)。产品系列设计:一个中小企业的实例。第13届国际工程设计会议(Culley, S.、Duffy, A.、McMahon, C.与Wallace, K.编),英国格拉斯哥,第507–514页。

[3] 沃马克,J.P.、琼斯,D.T.与鲁斯,D.(1990)。《改变世界的机器》。纽约:Rawson Associates。

[4] 沃特曼,J.C.、Muntslag, D.R.与Timmermans, P.J.M.编(1997)。《客户驱动的制造》。纽约:查普曼和霍尔。

[5] Yigit, A.S.、Ulsoy, A.G.与Allahverdi, A.(2002)。面向可重构制造的模块化产品设计优化。《智能制造杂志》13(4),309–316。

山东交通学院毕业设计(论文)

AI EDAM: Artificial Intelligence for Engineering Design, Analysis and Manufacturing

In an effort to improve customization for today's highly competitive global marketplace, many companies are utilizing product families and platform-based product development to increase variety, shorten lead times, and reduce costs. The key to a successful product family is the product platform from which it is derived, either by adding, removing, or substituting one or more modules to the platform or by scaling the platform in one or more dimensions to target specific market niches. This nascent field of engineering design has matured rapidly in the past decade, and this paper provides a comprehensive review of the flurry of research activity that has occurred during that time to facilitate product family design and platform-based product development for mass customization. Techniques for identifying platform leveraging strategies within a product family are reviewed along with metrics for assessing the effectiveness of product platforms and product families. Special emphasis is placed on optimization approaches and artificial intelligence techniques to assist in the process of product family design and platform-based product development. Web-based systems for product platform customization are also discussed.

Examples from both industry and academia are presented throughout the paper to highlight the benefits of product families and product platforms. The paper concludes with a discussion of potential areas of research to help bridge the gap between planning and managing families of products and designing and manufacturing them.

Today's highly competitive global marketplace is redefining the way many companies do business. The new form of competitive advantage is mass customization, and is, as Pine (1993a, p. xiii) says, "a new way of viewing business competition, one that makes the identification and fulfillment of the wants and needs of individual customers paramount without sacrificing efficiency, effectiveness, and low costs." In his seminal text on mass customization, Pine (1993a, p. 6) argues that "customers can no longer be lumped together in a huge homogeneous market, but are individuals whose individual wants and needs can be ascertained and fulfilled." He attributes the increasing attention on product variety and customer demand to the saturation of the market and the need to improve customer satisfaction: new products must be different from what is already in the market and must meet customer needs more completely. Sanderson and Uzumeri (1997, p. 3) add that "the emergence of global markets has fundamentally altered competition as many firms have known it," with the resulting market dynamics "forcing the compression of product development times and expansion of product variety." Findings from studies of the automotive industry (Womack et al., 1990; MacDuffie et al., 1996; Alford et al., 2000) and empirical surveys of manufacturing firms (Chinnaiah et al., 1998; Duray et al., 2000) confirm these trends. Similar themes pervade the text by Wortmann et al. (1997), who examine industry's response in Europe to the "customer-driven" market.

Social development, technological progress, product renewal, the quickening pace of life, and a host of other social and material factors mean that, even as people enjoy material comforts, they pay ever more attention to how products rate in terms of "convenience", "comfort", "reliability", "value", "safety", and "efficiency"; in other words, to the human-centered design issues so often raised in product design.


So-called user-friendly products are products that incorporate ergonomics. Any product used by a person should be considered from an ergonomic standpoint; product styling and ergonomics are inseparably linked. We may describe the relationship this way: with psychology as the center and physiology as the radius, ergonomics builds a harmonious relationship between people and things (products), taps human potential to the greatest extent, makes balanced, integrated use of human capabilities, and protects human health, thereby raising productivity. Within industrial design alone, everything created for production and daily life, from aerospace systems, urban planning, architecture, automated factories, machinery, and vehicles down to furniture, clothing, stationery, and pots, cups, bowls, and chopsticks, must treat the "human factor" as an essential consideration in design and manufacture. If products are divided into professional equipment and general goods, professional equipment calls for more ergonomic consideration, weighted toward the physiological side, whereas general products must also address psychological factors and demand designs more in keeping with aesthetics and fashion, that is, designs driven by the human-centered needs of the product.

Ergonomics is a young interdisciplinary science. It originated in Europe and took shape and developed in the United States. In Europe it is known as "Ergonomics", a name first proposed by the Polish scholar Jastrzebowski and formed from two Greek roots: "ergo" meaning "work, effort" and "nomics" meaning "law, rule". "Ergonomics" therefore means "the laws of human effort" or "the laws of human work"; that is, the discipline studies how people can labor and exert themselves reasonably and moderately in production or operation. In the United States it is called "Human Engineering" or "Human Factor Engineering". Japan uses "human engineering" (ningen kogaku) or the transliterated European name "Ergonomics"; the Russian transliteration is "Эргономика". In China the names vary as well: "human engineering", "human body engineering", "work efficiency science", "equipment utilization science", and "man-machine engineering", among others. To aid the discipline's development it is necessary to unify the name, and most people now call it ergonomics. The precise definition of ergonomics is: a comprehensive discipline that takes the human-machine-environment system as its basic object of study and, drawing on physiology, psychology, and other related disciplines, rationally allocates operating functions between people and machines according to their respective conditions and characteristics, adapting each to the other so as to create a comfortable, safe working environment and optimize working efficiency.


REFERENCES

[1] Baldwin, C.Y., & Clark, K.B. (2000). Design Rules: Volume 1. The Power of Modularity. Cambridge, MA: MIT Press.

[2] Berti, S., Germani, M., Mandorli, F., & Otto, H.E. (2001). Design of product families: an example within a small and medium sized enterprise. 13th Int. Conf. Engineering Design (Culley, S., Duffy, A., McMahon, C., & Wallace, K., Eds.), Glasgow, UK, pp. 507–514.

[3] Womack, J.P., Jones, D.T., & Roos, D. (1990). The Machine that Changed the World. New York: Rawson Associates.

[4] Wortmann, J.C., Muntslag, D.R., & Timmermans, P.J.M., Eds. (1997). Customer-Driven Manufacturing. New York: Chapman & Hall.

[5] Yigit, A.S., Ulsoy, A.G., & Allahverdi, A. (2002). Optimizing modular product design for reconfigurable manufacturing. Journal of Intelligent Manufacturing 13(4), 309–316.

第四篇:英文翻译

信息与控制工程学院毕业设计(论文)英文翻译

微机发展简史中英文文献翻译

摘要:微型计算机已经成为当代社会不可缺少的工具之一。在微型计算机的帮助下,人类能够完成以前难以完成的复杂运算,加快了科技发展的进程。本文讲述了微型计算机的起源及其发展,包括硬件与软件的发展以及整个微型计算机行业的发展过程。

关键字:微型计算机;发展进程


1最早的计算机

第一批存储程序计算机在1950年前后开始投入运行。我们在剑桥建造的EDSAC(延迟存储电子自动计算机)于1949年夏天首次投入使用。

最初的实验计算机是由像我这样背景各异的人建造的。我们都在电子工程方面有丰富的经验,并且相信这些经验会对我们大有裨益。后来证明确实如此,尽管我们也有一些新东西要学。其中最重要的是必须正确对待瞬态:在电视机荧幕上只会引起一次无害闪光的东西,在计算机里却可能导致严重的错误。

就计算电路而言,我们的选择多得让人眼花缭乱。举例来说,我们可以像EDSAC那样用真空二极管做门电路,也可以用两个栅极都加控制信号的五极管,后一种方案在其他地方被广泛使用。这类选择一直存在,“逻辑系列”一词也随之出现。在计算机领域工作过的人都会记得TTL、ECL和CMOS;到目前为止,CMOS已经占据了主导地位。

在最初那些年,IEE(英国电气工程师学会)仍由动力工程主导。为了让无线电工程连同迅速发展的电子学(IEE称之为“弱电工程”)被正式承认为一门独立的学科,我们不得不打几场硬仗。由于动力工程师们做事的方式与我们不同,我们在组织会议时也遇到了一些困难。让人有些恼火的是,IEE发表的所有论文都被要求以对先前实践的冗长陈述开头,而在根本不存在先前实践时,这很难做到。

到50年代末60年代初,个人英雄主义的开拓时代结束了,计算机领域真正开始蓬勃发展。世界上的计算机数量已大为增加,而且比最早的那些可靠得多。高级语言的起步和第一批操作系统的诞生都可以归于那些年。实验性的分时系统开始出现,计算机图形学最终也随之而来。

最重要的是,晶体管开始取代真空管。这个变化对当时的工程师是一项艰巨的挑战:他们必须忘掉自己熟悉的电路,从头再来。只能说,他们出色地经受住了挑战,而且这场转变进行得再顺利不过了。


2计算机的发展

2.1小规模集成电路和小型机

很快人们发现,在同一块硅片上可以放不止一个晶体管,集成电路由此诞生。随着时间的推移,集成度达到了让一个芯片容纳若干门电路和触发器的水平,由此出现了人们熟知的7400系列芯片。这些门电路和触发器相互独立,各有自己的引脚,可以通过片外导线连接起来,构成一台计算机或其他任何东西。
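As a minimal illustration of wiring independent gates into a stateful circuit, the sketch below simulates two cross-coupled NAND gates (half of a 7400-style quad-NAND package) forming an SR latch; the Python model is, of course, only an idealized logic-level sketch.

```python
# Two cross-coupled NAND gates form an SR latch with active-low inputs,
# the classic first circuit built from 7400-series packages.

def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch(s_n, r_n, q, q_n):
    """One settling pass of a cross-coupled NAND latch (active-low S/R)."""
    for _ in range(4):            # iterate until the feedback stabilizes
        q = nand(s_n, q_n)
        q_n = nand(r_n, q)
    return q, q_n

q, q_n = sr_latch(0, 1, 0, 1)     # assert S (active low): set the latch
print(q, q_n)                     # 1 0
q, q_n = sr_latch(1, 1, q, q_n)   # release both inputs: state is held
print(q, q_n)                     # 1 0
```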

这些芯片使一种新的计算机成为可能,它被称为小型机。它比大型机低一个档次,但仍然非常强大,而且便宜得多。一个企业或大学不必为整个机构购置一台昂贵的大型机,而是可以让每个主要部门都拥有一台小型机。

不久,小型机开始普及,功能也日益强大。世界急需计算能力,而工业界此前一直无法以所需的规模和合理的价格供应,这让人非常沮丧。小型机的出现改变了这一局面。

计算成本的下降并非始于小型机,情况从来就是如此。这就是我在摘要中所说的计算机工业中的“通货膨胀”“走向了相反方向”。随着时间的推移,人们花同样的钱得到的东西越来越多,而不是越来越少。

2.2硬件的研究

我所描述的时代,对从事计算机硬件研究的人来说是一段美妙的时光。7400系列的用户可以在门电路和触发器级别上工作,而其集成度又足以提供远高于分立晶体管的可靠性。大学或其他地方的研究者,可以建造任何他们丰富的想象力能够构想出来的数字设备。在计算机实验室里,我们建造了剑桥CAP,即一台具有精巧权能(capability)逻辑的全尺寸小型机。

7400系列在70年代中期仍然方兴未艾,并被用于Cambridge Ring,一个先驱性的宽带局域网。环网设计研究的发表恰在以太网公布之前。在这两种系统出现之前,人们大多满足于基于电传打字机的局域网。

环网需要高可靠性:脉冲在环中不断循环,必须持续地被放大和再生。正是7400系列芯片提供的高可靠性,给了我们着手Cambridge Ring项目所需的勇气。


2.3更小晶体管的出现

集成度还在不断提高。这是通过缩小原有的晶体管、使芯片上能放下更多晶体管来实现的。而且,物理定律站在制造商一边:晶体管仅仅因为变小就变得更快。因此,高密度和高速度得以兼得。

这还带来另一个优势。芯片制作在称为晶片(wafer)的硅圆片上,每个晶片上有大量独立的芯片,它们被同时加工,然后再分离。由于缩小使每个晶片能容纳更多芯片,每个芯片的成本就下降了。

单位成本下降对计算机工业很重要,因为如果最新的芯片既更快、制造起来又更便宜,就没有理由继续提供老产品,至少不会无限期地提供。这样,整个市场只需要一种产品。

然而,详细的成本计算表明,当缩小进行到一定程度后,为了保持这一优势,必须转向更大的晶片。晶片尺寸的增加绝非小事:最初晶片直径只有一两英寸,到2000年已达12英寸。起初我不明白:芯片缩小本已带来许多其他难题,工业界为何还要转向更大的晶片来给自己添麻烦。现在我明白了,降低单位成本对工业界与增加单个芯片上的晶体管数量同等重要,这足以证明在晶圆厂上追加投资并承担更大风险是合理的。
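The unit-cost argument above can be made concrete with a rough calculation. The wafer costs and die areas below are invented, and the chip count ignores edge losses and scribe lines; the sketch only shows why both shrinkage and larger wafers push the cost per chip down.

```python
# Rough wafer economics: more dies fit on a wafer as the die shrinks, and
# a larger wafer multiplies that again. All numbers are hypothetical.
import math

def chips_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Crude estimate: wafer area divided by die area."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

def cost_per_chip(wafer_cost, wafer_diameter_mm, die_area_mm2):
    return wafer_cost / chips_per_wafer(wafer_diameter_mm, die_area_mm2)

# A 50 mm^2 die on a 200 mm wafer at an assumed 1000-per-wafer cost...
small_wafer = cost_per_chip(1000.0, 200, 50.0)
# ...versus a 300 mm wafer, even at double the assumed wafer cost.
big_wafer = cost_per_chip(2000.0, 300, 50.0)
print(round(small_wafer, 2), round(big_wafer, 2))  # 1.59 1.42
```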

集成度用特征尺寸来衡量:对于特定的技术,它最好定义为用该技术制造的最高密度芯片中导线间距的一半。目前,90纳米芯片的生产仍在扩大之中。

2.4单片机

每次缩小之后,芯片数量减少了,芯片之间的连线也随之减少。这带来了整体速度的进一步提升,因为信号在芯片之间的传输要花很长时间。

最终,缩小进行到了除缓存之外的整个处理器都能放在一个芯片上的程度。这使得人们造出的工作站性能超过了当时最快的小型机,结果把小型机彻底送进了坟墓。众所周知,这对计算机工业和从业人员产生了深远的影响。

自那时起,高密度CMOS硅芯片占据了主导地位。缩小一直持续,直到数百万个晶体管可以放在单个芯片上,速度也成比例地提高。

为了获得额外的速度,处理器设计者开始试验旨在提速的新体系结构特性。一项非常成功的试验涉及预测程序分支走向的方法。它的成功程度令我惊讶:它显著加快了程序的执行速度,随后又出现了其他形式的预测。

同样令人惊讶的是,人们发现可以把许多高级特性做进单芯片计算机。例如,当年为IBM Model 91(System 360系列顶端的巨型机)开发的特性,如今出现在了微型计算机上。

墨菲定律仍处于失效状态。用7400系列这类小规模集成芯片搭建实验计算机已不再有意义。想在电路级做硬件研究的人别无选择,只能自己设计芯片并设法把它们制造出来。在一段时间内,这虽不容易,但还是可行的。

不幸的是,此后制造芯片的费用有了急剧的增长,主要原因是制造芯片所用光刻掩模的成本增加了。因此,为研究用芯片的制造筹措资金又变得十分困难,这是当前令人担忧的一个原因。

2.5半导体前景规划

上述各项进展背后广泛的研究与开发工作之所以可能,得益于国际半导体工业一次了不起的合作努力。

在过去,美国的反垄断法很可能会把这种合作视为非法。但在1980年前后,法律发生了意义深远的变化,“预竞争研究”的概念被引入。各公司现在可以在预竞争阶段展开合作,然后再按常规的竞争方式各自开发产品。

在半导体工业中,管理预竞争研究的机构是半导体工业协会(SIA)。它自1992年起作为美国国内组织开展活动,1998年成为国际性组织。任何能为研究工作做出贡献的组织均可加入。

每两年,SIA发布一个新版的《国际半导体技术规划》(ITRS),并在中间年份加以更新。第一卷以“规划(Roadmap)”为题于1994年发布,但写于1992年、1993年分发的两份报告被视为这一系列的真正开端。

后续各版规划旨在就工业界应当如何前进提供可获得的最佳行业共识。它们在15年的时间跨度上详细列出了必须达到的目标:使每块芯片上的元件数每18个月翻一番(即维持摩尔定律),同时使每块芯片的成本不断下降。
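The 18-month doubling target compounds dramatically over the roadmap's 15-year horizon, which a two-line calculation makes plain (the starting transistor count is hypothetical):

```python
# Moore's-law compounding: doubling every 18 months over a 15-year horizon.

def transistors_after(years, start_count, doubling_months=18):
    return start_count * 2 ** (years * 12 / doubling_months)

start = 1_000_000          # hypothetical starting chip
for years in (3, 9, 15):
    print(years, round(transistors_after(years, start)))
# 3 years -> 4x; 15 years -> 1024x the starting count
```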

对某些方面而言,前面的道路是清楚的。在另一些方面,制造上的问题可以预见,解决办法也已知晓,尽管尚未完全成熟;这些领域在表格中以黄色标出。已预见到问题、但尚无可制造的解决方案的领域,则标为红色。红色区域常被称为“红砖墙”。

规划设定的目标既现实又富有挑战性,整个半导体工业的进步一直与规划紧密同步。这是一项了不起的成就,可以说合作与竞争的优点在其中得到了令人钦佩的结合。

值得注意的是,影响半导体工业进步的重大战略决策,都是在相对开放的预竞争层面做出的,而不是闭门造车。其中就包括向更大晶片过渡的决策。

到1995年,我开始琢磨:当不可避免地到达无法把晶体管做得更小的那一点时,究竟会发生什么。带着这样的疑问,我访问了位于华盛顿的ARPA总部,在那里我得到了一份新出的1994年规划。它清楚地表明,当特征尺寸在2007年(预计)达到100纳米时将出现严重问题,2010年达到70纳米时亦然。在后来的规划中,100纳米(确切地说是90纳米)到来的年份被提前到2004年,而实际上工业界还要更早一点就到达了那一步。

我在1996年2月8日向伦敦IEE所作的题为《CMOS的终点及计算领域相关话题》的演讲中,介绍了来自1994年规划的上述信息,以及我所能获得的其他信息。

我当时的想法是,终点将直接源于表示“1”的可用电子数从数千个减少到几百个。到那时,统计涨落将成为麻烦;此后电路要么不再工作,要么即使工作,速度也不会更快。事实上,如今开始显现的物理限制并非来自电子短缺,而是因为芯片上的绝缘层已变得如此之薄,以致量子力学隧穿效应引起的泄漏成了麻烦。

除了来自基础物理的问题,芯片制造者还面对许多其他难题,尤其是光刻方面的困难。2002年出版的2001年规划更新版中指出,照目前速度继续前进的势头将面临风险:按规划预测,如果到2005年前大多数技术领域没有取得研究突破,进步将会停滞。这是迄今SIA关于“红砖墙”最明确、也最强硬的表述。2003年的规划进一步强调了这一点:许多领域被标为红色,表明这些领域存在尚无可制造解决方案的问题。

到目前为止可以令人满意地报告:所遇到的问题都及时找到了解决办法。规划是一份非凡的文档,尽管它坦陈了上述迫近的问题,却散发着巨大的信心。主流意见也反映了这种信心,人们普遍预期,通过某种方式,缩小将会继续,也许到45纳米甚至更小。

然而,成本将以越来越快的速度急剧上升。最终被视为叫停理由的,正是成本。至于工业界究竟在哪一点上达成“不断攀升的成本已无法承受”的共识,将取决于总体经济形势以及半导体工业自身的财力。

最先进芯片的绝缘层厚度仅有5个原子。除非找到更好的绝缘材料,否则我们将寸步难行,而对此我们目前束手无策。我们还不得不面对芯片布线的问题(导线越来越细),以及散热问题和原子迁移问题。这些问题都是相当基础性的:如果不能制作导线和绝缘层,我们就造不出计算机,无论CMOS工艺和半导体材料取得多大进步都无济于事,更不用指望靠什么新工艺或新材料重现集成度每18个月翻一番的美好时光了。

我在上文说过,芯片继续缩小到45纳米甚至更小是普遍的预期。在我看来,到某一点上,继续缩小我们所熟知的CMOS将不再可行,而工业界需要超越它。

2001年以来,规划中有一部分专门介绍非传统CMOS形式的新兴研究器件。人们正在积极而富于想象地探索多种途径,其中无疑会有一些被证明是有益的,规划也把这些进展与我们一直使用的传统CMOS明确区分开来。

2.6内存技术的进步

非传统CMOS有可能变革存储器技术。直到现在,我们仍依靠DRAM作为主存。不幸的是,随着芯片的缩小,DRAM速度的提高远不及处理器:处理器芯片及其相关缓存的速度大约每两年翻一番。这就是“存储器鸿沟”,也是人们焦虑的根源。若某种非传统CMOS器件带来存储技术上的突破,解决缓存无法解决的大容量存储需求,将使计算机整体性能获得很大的进步。

也许这一点,而不是让外围电路达到基本处理器的速度,才是非传统CMOS最终要扮演的角色。

2.7电子的不足

尽管到目前为止电子并未表现出明显的短缺,但从长远看它终将供不应求,这也许正是我们开发非传统CMOS器件的原因之一。在卡文迪许实验室,Haroon Ahmed已经做了许多有意义的工作,研究如何用单个电子来表示0和1的区别,但在构造实用的计算器件方面进展甚微。也许凭借运气,数十年后基于单个电子的计算机是可以实现的。


附:英文原文

Progress in Computers

Prestige Lecture delivered to IEE, Cambridge, on 5 February 2004

Maurice Wilkes, Computer Laboratory, University of Cambridge

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesse. For example, we could use vacuum tube diodes for gates as we did in the EDSAC or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term families of logic came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops. This led to a range of chips known as the 7400 series. The gates and flip flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry 'going the other way'. As time goes on people get more for their money, not less.

Research in Computer Hardware.

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

The single-chip computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor except for the caches could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From the above time the high density CMOS silicon chip was Cock of the Roost. Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution, and other forms of prediction followed.

Equally surprising is what it has been found possible to put on a single chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91 (the giant computer at the top of the System 360 range) are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek for ways to get them made. For a time, this was possible, if not easy.

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map


The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry. At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However about 1980 significant and far reaching changes took place in the laws. The concept of pre-competitive research was introduced. Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semi-conductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.

Every two years SIA produces a new version of a document known as the International Technological Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title ‘Roadmap’ was issued in 1994 but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15 year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months (that is, if Moore's law is to be maintained) and if the cost per chip is to fall.

In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.

The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner. It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was in later Roadmaps moved forward to 2004 and in the event the industry got there a little sooner.

信息与控制工程学院毕业设计(论文)英文翻译 (School of Information and Control Engineering, Graduation Project (Thesis) English Translation)

I presented the above information from the 1994 Roadmap, along with such other information as I could obtain, in a lecture to the IEE in London entitled ‘The CMOS end-point and related topics in Computing’, delivered on 8 February 1996. The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work or, if they did work, would not be any faster. In fact the physical limitations that are now beginning to make themselves felt do not arise through a shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that the continuation of progress at the present rate “will be at risk as we approach 2005, when the roadmap projects that progress will stall without research breakthroughs in most technical areas”. This was the most specific statement about the Red Brick Wall that had so far come from the SIA, and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming ahead, it radiates immense confidence. Prevailing opinion reflects that confidence, and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.

Insulating layers in the most advanced chips are now approaching a thickness equal to that of 5 atoms. Beyond finding better insulating materials, and that cannot take us very far, there is nothing we can do about this. We may also expect to face problems with on-chip wiring as wire cross sections get smaller. These will concern heat dissipation and atom migration. The above problems are very fundamental. If we cannot make wires and insulators, we cannot make a computer, whatever improvements there may be in the CMOS process or improvements in semiconductor materials. It is no good hoping that some new process or material might restart the merry-go-round of the density of transistors doubling every eighteen months.

I said above that there is a general expectation that shrinkage would continue by one means or another to 45 nm or even less. What I had in mind was that at some point further scaling of CMOS as we know it will become impracticable, and the industry will need to look beyond it.

Since 2001 the Roadmap has had a section entitled ‘Emerging Research Devices’, covering non-conventional forms of CMOS and the like. Vigorous and opportunistic exploitation of these possibilities will undoubtedly take us a useful way further along the road, but the Roadmap rightly distinguishes such progress from the traditional scaling of conventional CMOS that we have been used to.

Advances in Memory Technology


Unconventional CMOS could revolutionize memory technology. Up to now, we have relied on DRAMs for main memory. Unfortunately, these are increasing in speed only marginally as shrinkage continues, whereas processor chips and their associated cache memory continue to double in speed every two years. The result is a growing gap in speed between the processor and the main memory. This is the memory gap, and it is a current source of anxiety. A breakthrough in memory technology, possibly using some form of unconventional CMOS, could lead to a major advance in overall performance on problems with large memory requirements, that is, problems which fail to fit into the cache. Perhaps this, rather than attaining marginally higher basic processor speed, will be the ultimate role for non-conventional CMOS.
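The widening of the memory gap described above can be sketched numerically. The processor doubling time comes from the text; the DRAM improvement rate below is an assumed illustrative figure, since the text says only that the improvement is marginal:

```python
# Sketch of the growing processor/memory speed gap described above.
# Processor speed doubles every two years (as stated in the text);
# the 5%-per-year DRAM improvement is an assumed illustrative figure.
def relative_speed(years, doubling_years=None, annual_gain=None):
    """Relative speed after `years`, from a doubling time or annual gain."""
    if doubling_years is not None:
        return 2 ** (years / doubling_years)
    return (1 + annual_gain) ** years

cpu_speedup = relative_speed(10, doubling_years=2)   # 2**5 = 32x in a decade
dram_speedup = relative_speed(10, annual_gain=0.05)  # only ~1.6x
memory_gap = cpu_speedup / dram_speedup              # the gap widens ~20x
```

Under these assumptions the gap grows by roughly a factor of twenty per decade, which is why cache-resident workloads and cache-missing workloads diverge so sharply in performance.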

Shortage of Electrons

Although shortage of electrons has not so far appeared as an obvious limitation, in the long term it may become so. Perhaps this is where the exploitation of non-conventional CMOS will lead us. However, some interesting work has been done, notably by Haroon Ahmed and his team working in the Cavendish Laboratory, on the direct development of structures in which a single electron more or less makes the difference between a zero and a one. However, very little progress has been made towards practical devices that could lead to the construction of a computer. Even with exceptionally good luck, many tens of years must inevitably elapse before a working computer based on single-electron effects can be contemplated.

Article source: IEEE paper, University of Cambridge, 2004/2/5


Part 5: English Translation

Microsoft Visual Studio Products

Supported products

Included products


Microsoft Visual C++

Microsoft Visual C++ is Microsoft's implementation of the C and C++ compilers and associated language services and specific tools for integration with the Visual Studio IDE. It can compile either in C mode or C++ mode. For C, it follows the ISO C standard with parts of the C99 specification, along with MS-specific additions in the form of libraries.[41] For C++, it follows the ANSI C++ specification along with a few C++0x features.[42] It also supports the C++/CLI specification for writing managed code, as well as mixed-mode code (a mix of native and managed code). Microsoft positions Visual C++ for development in native code, or in code that contains both native and managed components. Visual C++ supports COM as well as the MFC library. For MFC development, it provides a set of wizards for creating and customizing MFC boilerplate code and for creating GUI applications using MFC. Visual C++ can also use the Visual Studio forms designer to design UI graphically. Visual C++ can also be used with the Windows API. It also supports the use of intrinsic functions,[43] which are functions recognized by the compiler itself and not implemented as a library. Intrinsic functions are used to expose the SSE instruction set of modern CPUs. Visual C++ also includes the OpenMP (version 2.0) specification.[44]

Microsoft Visual C#

Microsoft Visual C#, Microsoft's implementation of the C# language, targets the .NET Framework, along with the language services that let the Visual Studio IDE support C# projects. While the language services are a part of Visual Studio, the compiler is available separately as a part of the .NET Framework. The Visual C# 2008 and 2010 compilers support version 3.0 and 4.0 of the C# language specification, respectively. Visual C# supports the Visual Studio Class Designer, Forms Designer, and Data Designer, among others.[45]

Microsoft Visual Basic

Microsoft Visual Basic is Microsoft's implementation of the VB.NET language and associated tools and language services. It was introduced with Visual Studio .NET (2002). Microsoft has positioned Visual Basic for rapid application development.[46][47] Visual Basic can be used to author both console applications and GUI applications. Like Visual C#, Visual Basic also supports the Visual Studio Class Designer, Forms Designer, and Data Designer, among others. As with C#, the VB.NET compiler is also available as a part of the .NET Framework, while the language services that let VB.NET projects be developed with Visual Studio are available as a part of the latter.

Microsoft Visual Web Developer

Microsoft Visual Web Developer is used to create web sites, web applications and web services using ASP.NET. Either the C# or the VB.NET language can be used. Visual Web Developer can use the Visual Studio Web Designer to graphically design web page layouts.

Team Foundation Server

Included only with Visual Studio Team System, Team Foundation Server is intended for collaborative software development projects and acts as the server-side backend, providing source control, data collection, reporting, and project-tracking functionality. It also includes Team Explorer, the client tool for TFS services, which is integrated into Visual Studio Team System.

Editions

Microsoft Visual Studio is available in the following editions or SKUs:[53]

Express

The Visual Studio Express editions are a set of free lightweight individual IDEs, provided as stripped-down versions of the Visual Studio IDE on a per-platform or per-language basis; that is, they install the development tools for a supported platform (web, Windows, phone) or a supported development language (VB, C#) onto individual Visual Studio Shell AppIds. They include only a small set of tools compared with the other editions, and do not include support for plug-ins. 64-bit compilers are not included in the Visual Studio Express edition IDEs, but are available as part of the Windows Software Development Kit, which can be installed separately. The initial announcement[55] indicated that the 2012 release would be limited to creating Windows 8 Metro-style applications, but in response to developer feedback Microsoft reversed this decision and announced that desktop application development would also be supported.[56] Microsoft targets the Express IDEs at students and hobbyists. The Express editions do not use the full MSDN Library but the MSDN Essentials Library. The languages available as part of the Express IDEs are:[57]

Visual Basic Express

Visual C++ Express

Visual C# Express

Visual Web Developer Express

Express for Windows Phone

Visual Studio LightSwitch

Microsoft Visual Studio LightSwitch is an IDE specifically tailored for creating line-of-business applications built on existing .NET technologies and Microsoft platforms. The applications it produces are architecturally three-tier: the user interface runs on Microsoft Silverlight; the logic and data-access tier is built on WCF RIA Services and the Entity Framework, hosted in ASP.NET; and the primary data storage supports Microsoft SQL Server Express, Microsoft SQL Server and Microsoft SQL Azure. LightSwitch also supports other data sources, including Microsoft SharePoint. LightSwitch includes graphical designers for designing entities and the relationships between them, entity queries, and UI screens. Business logic may be written in either Visual Basic or Visual C#. The tool can be installed as a stand-alone SKU or as an add-in to Visual Studio 2010 Professional and higher editions.[58]

Visual Studio Professional

Visual Studio Professional provides an IDE for all supported development languages. It supersedes the Standard edition as of Visual Studio 2010.[59] MSDN support is available as MSDN Essentials or the full MSDN Library, depending on the licensing. It supports XML and XSLT editing, and can create deployment packages using only ClickOnce and MSI. It also includes the Server Explorer and integration tools such as those for Microsoft SQL Server. Windows Mobile development support was included in Visual Studio 2005 Standard; with Visual Studio 2008, however, it is available only in Professional and higher editions. Windows Phone 7 development support was added to all editions in Visual Studio 2010. Windows Mobile development is no longer supported in Visual Studio 2010; it is superseded by Windows Phone 7.

Visual Studio Premium

Visual Studio Premium includes all the tools available in Visual Studio Professional and adds additional functionality such as code metrics, profiling, static code analysis, and database unit testing.

Visual Studio Tools for Office

Visual Studio Tools for Office is an SDK and an add-in for Visual Studio that includes development tools for the Microsoft Office suite.

Previously (for Visual Studio .NET 2003 and Visual Studio 2005), it was a separate SKU that supported only the Visual C# and Visual Basic languages or was included in Team Suite. In Visual Studio 2008, it is no longer a separate SKU but is included with the Professional and higher editions. A separate runtime is required when deploying VSTO solutions.

This article is from 99 Academic Network (www.99xueshu.com); please retain the URL and source when reposting.
