Programming Language for the Problem Domain of Science: ALPHA

Name of New Science Programming Language

ALPHA is the name of the new programming language for the problem domain of science. It combines the advantages of object-oriented and dynamic languages. It is intended to replace static languages and is well suited to large-scale scientific software system development.

Introduction and explanation of language purpose

ALPHA is a dynamic programming language that combines the advantages of object-oriented and dynamic languages, retaining several features of C++ as well as LISP. Its main strength is therefore in building component software for safety-oriented applications that require compactness. The goal of the language is to allow fast-paced programming and development; it is also robust in that it supports continuous refinement of prototypes.

Programming languages increase the capacity to express ideas: they open opportunities to explore new languages, build language features and encourage their use, improve the background for language selection, increase the ability to learn new languages, deepen understanding of implementation and its significance and, most importantly, teach how to design new languages. Accordingly, computer applications in different domains (business, artificial intelligence, scientific applications, or systems programming) use different programming languages to achieve different goals. A domain-specific language (DSL) is a computer language specialized for a particular application domain, in contrast to a general-purpose language (GPL), which is applied across multiple domains.

Every problem domain identified offers an opportunity for programmers and language designers to explore and innovate. In the problem domain of science, applications need simple data structures, arrays, and floating-point operations. They also require control structures for counting loops and selection. Scientific work further involves matrix manipulation, which needs fast floating-point calculation and the capability to load large amounts of data into memory in line with input and output operations. Among existing languages, Java is limited in terms of performance and memory use, so it does not offer the best solution for manipulating large matrices. Similarly, C and C++, despite supporting object-oriented programming, have their own limitations.

A programming language is evaluated against different criteria: readability, writability, and reliability. When designing a language there are other influential factors as well, such as computer architecture and programming methodologies. Languages fall into different categories: imperative (based on the von Neumann architecture), functional (based on mathematical functions), logic or rule-based, and object-oriented or object-based.

There are typical trade-offs in language design. A language designed for reliability is likely to execute more slowly, while a language that is very readable is not always easy to write. Similarly, a flexible language is typically not as safe to use, and a safe language is not likely to be very flexible.

Implementing a language involves one of three methods: compilation, pure interpretation, or a hybrid of the two. The programming environment then includes a collection of tools such as a file system, compiler, editor, and linker. A domain-specific language has to be designed at three levels: lexical, syntactic, and semantic.

Syntax

The literals in use include:

- Booleans: #true and #false
- Numbers: 32 for a decimal integer, #b1010 for a binary integer, #02 for an octal integer, -34.5 for a floating-point number, #x3B for a hexadecimal integer
- Strings: 'x' for a character, "welcome" for a simple string, "welcome/m" for a string with an escape
- Symbols: #"welcome" for a symbol (writing #"welcome" again denotes the same symbol), and welcome: for keyword syntax
- Collections: #(1 . "one") for a literal pair, #(1, 2, 3) for a list literal, and #[1, 2, 3] for a vector

Naming conventions

Classes are named using brackets, constants are prefixed with $, module variables use * symbols, and local variables use the "loc" prefix. Predicate functions return #true or #false and end in ?.

Operators include equality, comparison, arithmetic, collection, sequence, bitwise, and logical operators.

String Formatting

Formatting is performed with a call such as format(sequence, "%s: %d", port), where the %s directive takes an object and formats it; this is described as the 'unique' format.

The semantics include direct pointer access, dynamic typing, adaptation of an existing garbage collector, and concurrency handled through the actor model, given the trend toward multiple cores. The major concepts are objects and functions; the language is designed to implement both, with certain omissions (Ploger, 1991). An extensive library provides wide control flow, data types, and numeric support useful for writing programs. All values and data are treated as 'objects'; objects are grouped into 'classes', and every object is an 'instance' of at least one class. The classes are arranged in a hierarchy, which keeps the inheritance graph acyclic: each class inherits features from classes higher in the hierarchy, and the root of the hierarchy is the class 'object'.
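
Because ALPHA's concrete concurrency syntax is not specified here, the actor model described above can be sketched in Python instead: each actor owns a mailbox and handles one message at a time, so its state is never touched by two cores at once. The Actor class and the example handler below are illustrative assumptions, not part of any existing library.

import threading
import queue

class Actor:
    """Minimal actor: owns a mailbox and processes one message at a time."""
    def __init__(self, handler):
        self.handler = handler
        self.mailbox = queue.Queue()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, message):
        # Senders never touch the actor's state directly; they only enqueue messages.
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:          # poison pill to stop the actor
                break
            self.handler(message)

# Usage: an actor that accumulates partial results from concurrent producers.
totals = []
adder = Actor(lambda x: totals.append(x))
for i in range(5):
    adder.send(i * i)
adder.send(None)
adder.thread.join()
print(sum(totals))   # 30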

The class is central to the structure of the language. Classes retain state, and each class is logically associated with its instances (also called events). The classes are organized in a hierarchy in which one level sits above the next, which keeps the graph of instances and event resolutions acyclic. Every class is identified by a unique name, and each name is associated with functions that fetch values and functions that store them, so reading and writing names in the desired sequence of actions is straightforward.

Functions in ALPHA are themselves objects. They cover procedures, methods, and messages to languages other than the native one, and they come in two kinds: methods and generic functions. There can be any number of associations between the two, so the central question is which methods belong to which generic function, and the argument list, together with the code, is crucial to this classification. A method lists all of its formal parameters, and the arguments supplied must match them; if the arguments are not of the expected classes, a different method has to be defined, possibly over subclasses or other declared parameters. When a function is applied to arguments that do not match, the resulting error situation has to be handled, and the priority is to ensure there is no mismatch between argument types and the function. A strong type-checking feature is therefore introduced to capture such errors, a design that makes static analysis possible even at run time.

Generic functions group zero or more methods. Methods that share a name are expected to perform the same role, so it is important to examine a generic function and either keep its methods on the same class or redefine the parameters. At a call, the types of the arguments are examined and the appropriate method is invoked; if no method applies, an error is signalled until a correct call is made. This design catches faulty uses of generic functions early. Generics can be created or modified at run time, and dynamic type checking ensures the program can still run to completion. Generic functions must not be over-defined, and the language's polymorphic capabilities are exercised through them.
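
As a rough analogy (in Python rather than ALPHA), the standard library's functools.singledispatch illustrates the dispatch idea: a generic function selects a method by the class of its argument and signals an error when no method applies. Note that singledispatch dispatches only on the first argument, whereas the design described above considers the whole argument list; the Circle and Square classes here are purely illustrative.

from functools import singledispatch

@singledispatch
def area(shape):
    # No method matches this argument class: signal a dispatch error.
    raise TypeError(f"no applicable method for {type(shape).__name__}")

class Circle:
    def __init__(self, r): self.r = r

class Square:
    def __init__(self, side): self.side = side

@area.register
def _(shape: Circle):
    return 3.14159 * shape.r ** 2

@area.register
def _(shape: Square):
    return shape.side ** 2

print(area(Circle(1.0)))     # 3.14159
print(area(Square(2.0)))     # 4.0
# area("not a shape")        # would raise TypeError: no applicable method for str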

To avoid ambiguity among the descriptions of classes, generic functions, and methods, their relationships are formalized. Abstract syntax, descriptions, and basic operations for functions and classes become necessary, and validity functions are included to simplify static analysis. Class definitions follow an acyclic graph, and representations in terms of class, class name, and superclasses are used consistently; the subclasses, which comprise a list of classes, directly inherit the properties associated with the class name. The basic operations relate to generic functions, and avoiding over-defined generics is critical to how the language works. The type of each parameter is examined before the associated method is invoked (Martin-Löf, 1982), and the method selected matches the types in use. Every generic function therefore maintains a list of methods, with each element of the list keyed by its parameter types.
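
A minimal sketch, in Python, of the kind of validity check implied above: given class definitions as a mapping from class name to superclass names, verify that the inheritance graph is acyclic. The function name is_acyclic and the sample hierarchy are illustrative assumptions.

def is_acyclic(superclasses):
    # superclasses maps a class name to the list of names it inherits from.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(name):
        state = color.get(name, WHITE)
        if state == GRAY:        # back edge: the inheritance graph has a cycle
            return False
        if state == BLACK:       # already verified
            return True
        color[name] = GRAY
        for parent in superclasses.get(name, []):
            if not visit(parent):
                return False
        color[name] = BLACK
        return True

    return all(visit(name) for name in superclasses)

hierarchy = {
    "object": [],
    "number": ["object"],
    "integer": ["number"],
    "float": ["number"],
}
print(is_acyclic(hierarchy))                    # True
print(is_acyclic({"a": ["b"], "b": ["a"]}))     # False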

A pointer to the corresponding method is kept for each parameter list. The top-level operations on generic functions include creating a new generic function and removing one (Guzdial, 1994), adding methods to an existing generic function, removing methods from it, and applying a generic function to arguments. The method is the basic functional unit: it takes a list of typed parameters and returns a typed value. Individual method objects are of little significance to users, because a defined method is automatically attached to the generic function of the same name, and new methods are attached to newly created generic functions. Methods are also useful for defining unique identifiers, which relates to the creation of 'keys': a new key is equivalent to the expression that introduces it into a given namespace. The parameter lists, together with their keys, provide the necessary dispatch information, but these remain generic. The creation of a new generic function must itself be identifiable, and adding a new method alters the generic function, which has to be supported by the function space as well as the scheme environment.
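
The operations described above (creating a generic function, adding and removing methods, applying it to arguments) can be sketched in Python as follows; GenericFunction, add_method, and remove_method are hypothetical names used only for this illustration, and dispatch here is on the exact classes of all arguments.

class GenericFunction:
    """A generic function: a name plus a table of methods keyed by parameter classes."""
    def __init__(self, name):
        self.name = name
        self.methods = {}                 # (type, type, ...) -> callable

    def add_method(self, types, fn):
        self.methods[tuple(types)] = fn

    def remove_method(self, types):
        self.methods.pop(tuple(types), None)

    def __call__(self, *args):
        key = tuple(type(a) for a in args)
        if key not in self.methods:
            raise TypeError(f"{self.name}: no method for argument types {key}")
        return self.methods[key](*args)

combine = GenericFunction("combine")
combine.add_method((int, int), lambda a, b: a + b)
combine.add_method((str, str), lambda a, b: a + " " + b)

print(combine(2, 3))                 # 5
print(combine("hello", "world"))     # hello world
combine.remove_method((int, int))
# combine(2, 3) would now raise TypeError: no method for argument types (int, int)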

For example, the cube root of a number can be computed with Newton's method as follows:

(name-method newtons-cube (x)
  (trim-methods ((cube1 (guess)
                   (if (close? guess)
                       guess
                       (cube1 (improve guess))))
                 (close? (guess)
                   (< (abs (- (* guess guess guess) x)) 0.001))
                 (improve (guess)
                   (/ (+ (* 2 guess) (/ x (* guess guess))) 3)))
    (cube1 1)))

Choice and justification of interpretation/compilation method(s) to be used

Since computers can execute only machine code, a program written in ALPHA has to be translated before it can run. I propose to use the translator X, written for language X, so the implementation will include a Virtual X Machine. The general translation options are (1) compilation, (2) interpretation, and (3) hybrid translation, which uses both compilation and interpretation. With a hybrid model, ALPHA remains easy to use because the source program is read each time it is run; static errors are detected during the compilation step, and when the source changes it is simply re-translated and interpreted again. The hybrid approach gives high flexibility and fast programming reruns, though some optimization opportunities are lost.
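
As a rough illustration of the hybrid model, the Python sketch below compiles a source string to bytecode (which is where static, i.e. syntax, errors surface), caches the result, and then interprets it; when the source changes, it is simply recompiled. The caching scheme is an assumption made for the example, not part of ALPHA.

import hashlib

_cache = {}   # source hash -> compiled bytecode

def run(source):
    key = hashlib.sha256(source.encode()).hexdigest()
    if key not in _cache:
        # Compilation step: static (syntax) errors are reported here, before execution.
        _cache[key] = compile(source, "<alpha-sketch>", "exec")
    # Interpretation step: the virtual machine executes the cached bytecode.
    exec(_cache[key], {})

run("print(2 ** 10)")     # compiled once, then interpreted: prints 1024
run("print(2 ** 10)")     # source unchanged: reuses the cached bytecode
# run("print(3 *")        # would raise SyntaxError at the compilation step, before anything executes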

With a compiler, the source language is translated into the target language through the following stages: scanning, parsing, semantic analysis, intermediate code generation, optimization, and code generation, producing the target program.
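
The following Python sketch shows these stages for a tiny arithmetic expression language; semantic analysis and optimization are omitted for brevity, and the token set and stack-machine instructions are assumptions made for the example, not part of ALPHA.

import re

def scan(source):
    # Scanning: break the character stream into tokens.
    return re.findall(r"\d+|[+*()]", source)

def parse(tokens):
    # Parsing: build a nested (op, left, right) tree; '+' binds looser than '*'.
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "+":
            right, i = term(i + 1)
            node = ("+", node, right)
        return node, i
    def term(i):
        node, i = factor(i)
        while i < len(tokens) and tokens[i] == "*":
            right, i = factor(i + 1)
            node = ("*", node, right)
        return node, i
    def factor(i):
        if tokens[i] == "(":
            node, i = expr(i + 1)
            return node, i + 1          # skip ')'
        return ("num", int(tokens[i])), i + 1
    tree, _ = expr(0)
    return tree

def generate(tree, code):
    # Code generation: emit a simple stack-machine program.
    if tree[0] == "num":
        code.append(("PUSH", tree[1]))
    else:
        generate(tree[1], code)
        generate(tree[2], code)
        code.append(("ADD",) if tree[0] == "+" else ("MUL",))
    return code

program = generate(parse(scan("2 + 3 * (4 + 1)")), [])
print(program)   # [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('PUSH', 1), ('ADD',), ('MUL',), ('ADD',)]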

Discussion of memory management and scoping features

ALPHA is designed for automatic memory management. Allocation is requested with a simple 'memmake' directive. The implementation also supports finalization and weak hash tables, exposed through interfaces for features that are still in the process of standardization. Finalization is performed through the function 'finalwhennotreach', which registers a finalization function to be called in coordination with the garbage collector; for this to start, the object must first be found unreachable. Weak hash tables may allow keys, values, or both to be weak, subject to parameters provided at allocation time (Spohrer, 1989). As soon as the garbage collector determines that no strong reference to a key or value remains, the corresponding entries in weak-key and weak-value tables are deleted.
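
Python's weakref module provides analogous, real mechanisms and can serve as a sketch of the intended behaviour; 'memmake' and 'finalwhennotreach' are ALPHA's own names and do not appear below.

import weakref

class Resource:
    def __init__(self, name):
        self.name = name

r = Resource("scratch-buffer")
# Analogue of finalwhennotreach: run a callback once the object becomes unreachable.
weakref.finalize(r, print, "finalized: scratch-buffer")

# Analogue of a weak-value table: entries disappear when the value loses all strong references.
table = weakref.WeakValueDictionary()
table["buf"] = r
print(list(table.keys()))   # ['buf']

del r                        # drop the last strong reference
print(list(table.keys()))   # [] once the object has been collected; the finalizer also runs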

Specification and rationale for major language features in terms of:

Simplicity: uses a fixed number of keywords, control structures, and types; pointers are favoured over references, and increment operators as in C are not used.

Orthogonality: features are independent; data of all types can be passed by value as well as by reference, and data can be passed to and returned from functions.

Data types – complex for easy writing

Syntax design – yes

Support for abstraction – yes

Expressivity – complex

Type checking – strong

Exception handling – present

Restricted aliasing – Present

Discussion of the readability, writability and reliability of the language based on the language characteristics as chosen

Reliability and writability are a trade-off; nevertheless, the language supports writing reliable programs. A characteristic complexity is therefore involved in the language: more data types are used to make writing easier, and the introduction of exception handling, though it increases readability, also adds complexity.

The design of a programming language has to serve multiple objectives. There has to be sophistication in the data types, since the language is meant for the scientific domain, and the specifications have to be advanced (Peyton Jones, 1987). Errors during programming have to be caught as early as possible, while at the same time the language must run error-free and be safe to use; that is, vulnerabilities should be minimal and optimization efficient. In theory this is possible only when the compilation strategies are efficient. A modern language should have strong libraries, almost matching the power of an assembly language, with sets of instructions that are useful for solving problems at a higher level. Although assembly language is considered very effective for scientific-domain software because it optimizes use of the underlying hardware, it brings difficulties in day-to-day development, so dynamic elements are also critical. By adding finer layers of more advanced object-oriented features on top, usability and the user experience improve, and capturing errors becomes easier through control structures and strong type checking.

One of the major issues with programming languages is that the field is increasingly dominated by what are called network effects (Owen, Kent, & Dale, 2004). Using a minority language is often costly: for a small or limited number of projects the costs are exorbitant relative to the benefits they offer in the long term. It is therefore generally recommended to use a majority language. However, in problem domains such as science, minority languages also hold powerful use cases, especially on a per-project basis with expert recommendation. Using general-purpose language compilers offers wider scope for such projects. The focus for a new language is therefore on designing a good language and promoting it among programmers, since its productivity, cost, and quality make it worth using. Generic solutions remain supportive in the background, while advanced languages such as ALPHA allow higher productivity and bring further value to the domain of science.

References

Berry, G., & Gonthier, G. (1992). The Esterel synchronous programming language: Design, semantics, implementation. Science of Computer Programming, 19(2), 87-152.

Guzdial, M. (1994). Software-realized scaffolding to facilitate programming for science learning. Interactive Learning Environments, 4(1), 1-44.

Hoare, C. A. R. (1969). An axiomatic basis for computer programming. Communications of the ACM, 12(10), 576-580.

Knuth, D. E. (1992). Literate programming. CSLI Lecture Notes. Stanford, CA: Center for the Study of Language and Information (CSLI).

Maloney, J., Resnick, M., Rusk, N., Silverman, B., & Eastmond, E. (2010). The Scratch programming language and environment. ACM Transactions on Computing Education (TOCE), 10(4), 16.

Martin-Löf, P. (1982). Constructive mathematics and computer programming. Studies in Logic and the Foundations of Mathematics, 104, 153-175.

Owen, N. W., Kent, M., & Dale, M. P. (2004). Plant species and community responses to sand burial on the machair of the Outer Hebrides, Scotland. Journal of Vegetation Science, 15(5), 669-678.

Peyton Jones, S. L. (1987). The implementation of functional programming languages (Prentice-Hall International Series in Computer Science). Prentice-Hall.

Ploger, D. (1991). Learning about the genetic code via programming: Representing the process of translation. The Journal of Mathematical Behavior.

Smith, D. C., Cypher, A., & Spohrer, J. (1994). KidSim: Programming agents without a programming language. Communications of the ACM, 37(7), 54-67.

Spohrer, J. C. (1989). Marcel: A generate-test-and-debug (GTD) impasse/repair model of student programmers.

Steele Jr., G. L. (1980). The definition and implementation of a computer programming language based on constraints.

Social Media Programming, Especially Focusing on Google, Twitter and Facebook

Abstract

Contemporary society calls the 21st century "the century of data", and that observation is entirely correct. The present proposal goes beyond the familiar revelations of the 21st century; it is proof of how far the spread of data can go.

Social media has changed how software engineers collaborate, how they coordinate their work, and where they find information. Social media sites, for example the question-and-answer (Q&A) portal Stack Overflow, host many entries that add to what we know about software development, covering a wide variety of subjects. For today's software engineers, reusable code snippets, common use cases, and suitable libraries are often only a web search away.

In this position article, we discuss the opportunities and challenges for software developers who rely on web content curated by the community, and we envision the future of an industry in which individual developers benefit from and contribute to a body of knowledge maintained by the community through social media.

While the Web has provided software engineers with a virtually limitless source of knowledge, it is the rise of social media that has introduced powerful mechanisms for a large crowd of developers to curate content on the web.

Users can endorse articles through features such as Facebook "Likes", give positive or negative ratings to questions and answers on Q&A sites, and comment on a wide range of blog posts. Content on social media sites ranges from tutorials and experience reports to code snippets and examples.

The phenomenon discussed in what follows is social media programming, focusing especially on Google, Twitter and Facebook; these particular platforms were selected in order to derive better results.

As part of the research, we conducted surveys on social media programming, and tutorials on Facebook and Twitter programming were prepared with the help of examples.

A final conclusion is drawn by comparing these platforms from the programmer's point of view.

Understanding Big Data – In the Context of Internet of Things Data

Executive Summary

This technology review, "Knowing Internet of Things Data", is a critical review of the Internet of Things (IoT) in the context of Big Data as a technology solution for business needs. It evaluates the potential exploitation of big data and its management in relation to Internet of Things devices. The article begins with a literature review of Internet of Things data, defining it in an academic context, and then analyses the usability of big data in a commercial or business-economics context. It discusses and evaluates applications of Internet of Things data that add value to a business. The main objective of the review is to communicate the business sense, or business intelligence, in an organization's use of big data. The premise of the paper is that Internet of Things data is an emerging science with countless opportunities for organizations to exploit: to build services and products, or to bridge gaps in the delivery of technology solutions. The possibilities of using big data for marketing, healthcare, personal safety, education and many other economic and technological solutions are discussed.

Introduction

Organizational decisions are increasingly being made from data generated by the Internet of Things (IoT), in addition to traditional inputs. IoT data is empowering organizations to manage assets, enhance and strengthen performance, and build new business models. According to MacGillivray, Turner, and Lund (2013), the number of IoT installations is expected to exceed 212 billion devices by 2020. Management of data therefore becomes a crucial aspect of IoT, since different types of objects interconnect and constantly exchange different types of information. The volume of data generated and the processes for handling it are critical to IoT and require the use of several technologies and factors.

The globally addressable market for IoT is estimated to reach $1.3 trillion by 2019. New business opportunities are therefore plentiful, allowing organizations to become smarter, enhance their products and services, and improve the user/customer experience, thereby creating a quantified economy. According to Angeles (2016), (1) Internet of Things spending is $669; (2) smart-home connectivity spending is $174 million; (3) spending on connected cars by 2020 is $220 million. This has led companies to revisit their decisions: (1) Are the organization's services or products capable of connecting and transmitting data? (2) Is the organization able to extract optimal value from the data it already has? (3) Do the connected devices at the organization provide an end-to-end view? (4) Does the organization need to build IoT infrastructure, or only parts of a solution to connect devices? Some examples of IoT and business value: (a) a real-estate holding company adopts smart-building networking for real-time power management and saves substantially on expenses in this area; (b) incorporating sensors in vehicles allows logistics companies to gain real-time input on the environmental and behavioural factors that determine performance; (c) mining companies can monitor air quality for safety measures and to protect miners.

Hence, the immediate results of IoT data are tangible and relate to various organizational fronts: optimized performance, lower risk, and increased efficiency. IoT data becomes the vital bridge for organizations to gain insight, strengthen their core business, improve safety, and leverage data for business intelligence, without having to become data companies themselves. Organizations can continue to focus on their deliverables rather than on the back end of generating value from data, by using the many IoT data management and storage technologies offered competitively by vendors.

Algorithm Marketplaces

Big data is entering an 'industrial revolution' stage, in which machines (social networks, sensor networks, e-commerce, web logs, call-detail records, surveillance, genomics, internet text and documents) generate data faster than people do, with volumes growing exponentially in line with Moore's Law. Against this background, virtual marketplaces where algorithms (code snippets) are bought and sold are expected to be commonplace by 2020. Gartner expects three vendors to dominate the marketplace, and they are set to transform today's software market with a dominance of analytics. Simply put, an algorithm marketplace improves on the current app economy: what is traded are entire "building blocks" that can be tailored to match the end-point needs of the organization. (1) Granular software will be sold in greater quantities, since software for just one function or feature will be available at a low price. (2) Powerful, advanced, cutting-edge algorithms that inventors previously restricted to in-house use become commercially available, widening the scope of application and benefiting businesses. (3) Reuse and recycling of algorithms is optimized. (4) Quality assessment is optimized.

Model Factory

Data storage is cheap, so stored data can be mined to generate information. Technologies such as MPP (massively parallel processing) databases, distributed databases, cloud computing platforms, distributed file systems, and scalable storage systems are in use. Using open-source platforms such as Hadoop, the resulting data lake can be developed for predictive analytics by adopting a model-factory principle. With this technology, for which there are several vendors, the data an organization generates does not have to be handled directly by a data scientist; the focus shifts to asking the right questions of predictive models. The technology brings real automation to data science: where work was traditionally moved from one tool to the next so that different data sets could be generated and validated by models, automating this processing not only removes human error but also makes it possible to manage hundreds of models in real time. In the model factories of the future, software will pre-manage the data, and scientists will concentrate only on how to run models rather than on iterating their work. The model factories of the future are the Google and Facebook of today, but without the number-crunching army of engineers: automated software manages data science processing through tooling and the pervasiveness of machine learning technologies. Examples include Skytree.
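
A minimal sketch of the model-factory idea, assuming scikit-learn is available: several candidate models are trained and validated automatically over the same data, so the analyst's job reduces to choosing the question and the metric. The candidate list and dataset below are illustrative.

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Stand-in for data arriving from a data lake.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# The "factory": candidate models are trained and validated automatically.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: mean accuracy {score:.3f}")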

Edge analytics

The business environment creates unstructured databases that can exceed zettabytes and petabytes and demand specific treatment in terms of storage, processing, and display. Large data-crunching companies such as Facebook or Google therefore cannot use conventional database analytic tools such as those offered by Oracle, because big repositories require agile, robust platforms based on distributed or cloud systems, or on open-source systems such as Hadoop. These involve massive data repositories and thousands of nodes, and evolved from tools developed by Google Inc., such as MapReduce, distributed file systems, and NoSQL. None of these are compliant with conventional database properties such as atomicity, consistency, isolation, and durability. To overcome the challenge, data scientists collect data and analyse it using automated analytic computation at the sensor, network switch, or other device itself, without requiring that the data be returned to a central data store for processing. By annotating and interpreting data in this way, mining of the data acquired by network resources becomes possible.
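
A minimal sketch of edge analytics in Python: raw readings are summarized on the device itself, and only the compact summary is sent on, instead of returning every reading to the central data store. The sensor and the summary fields are assumptions for the example.

import random
import statistics

def read_sensor():
    # Stand-in for a real temperature sensor on an edge device.
    return 20.0 + random.gauss(0, 0.5)

def summarize_window(n=1000):
    # Edge analytics: reduce n raw readings to a small summary on the device itself.
    readings = [read_sensor() for _ in range(n)]
    return {
        "count": n,
        "mean": statistics.mean(readings),
        "stdev": statistics.stdev(readings),
        "max": max(readings),
    }

summary = summarize_window()
print("transmitting to the data store:", summary)   # a few numbers instead of 1000 readings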

Anomaly detection

Data scientists use outlier detection, or anomaly detection, to identify instances or events that do not fit the template pattern of the items in a data set; in short, these are the data points that differ in many ways from the remainder of the data. Applications include credit card fraud, fault detection, telecommunication fraud, and network intrusion detection. Statistical tools such as Grubbs' test are also used to detect outliers in univariate data (Tan, Steinbach, & Kumar, 2013).
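
A sketch of Grubbs' test for a single outlier in univariate data, assuming NumPy and SciPy are available; the sample readings and the significance level are illustrative.

import numpy as np
from scipy import stats

def grubbs_test(values, alpha=0.05):
    # Flag the single most extreme value if it is an outlier at significance level alpha.
    x = np.asarray(values, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))
    g = abs(x[idx] - mean) / sd                        # Grubbs' statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)        # critical t value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return x[idx], g > g_crit

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 14.7, 9.9]
value, is_outlier = grubbs_test(readings)
print(value, is_outlier)    # 14.7 True: the reading is flagged as an anomaly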

Event streaming processing

ESP, or event stream processing, describes the set of technologies designed to aid the construction of event-based information systems. These technologies include event visualization, event databases, event-driven middleware, event-processing languages, and complex event processing. Data is processed as soon as it is collected, without a waiting period, and output is created almost instantaneously.
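
A minimal sketch of the event-streaming idea in plain Python: each event is processed the moment it is produced, with no batch or waiting period. The sensor_stream generator and the alert threshold are assumptions for the example.

import time
import random

def sensor_stream(n_events=10):
    # Stand-in for an unbounded stream of sensor events.
    for _ in range(n_events):
        yield {"ts": time.time(), "value": random.uniform(0, 100)}

def process(stream, threshold=90):
    # Each event is handled the moment it arrives; nothing is batched or stored first.
    for event in stream:
        if event["value"] > threshold:
            print(f"ALERT at {event['ts']:.0f}: value {event['value']:.1f}")

process(sensor_stream(100))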

Text analytics

Text analytics refers to text data mining and uses text as the unit for information generation and analysis. The quality of information derived from text is high when patterns are identified and trends extracted through statistical pattern learning. Unstructured text data is processed into meaningful data for analysis, so that customer opinions, feedback, and product reviews can be quantified. Applications include sentiment analysis, entity modelling, and support for decision making.
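
A toy sketch of text analytics in Python: unstructured review text is reduced to word counts and a crude sentiment score using a tiny hand-made lexicon. Real systems use far larger lexicons or statistical models; the word lists and reviews here are purely illustrative.

from collections import Counter
import re

POSITIVE = {"good", "great", "excellent", "love", "fast"}
NEGATIVE = {"bad", "poor", "slow", "broken", "hate"}

reviews = [
    "Great product, fast delivery, love it",
    "Poor quality and the screen arrived broken",
    "Good value but the app is a bit slow",
]

for text in reviews:
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    score = sum(counts[w] for w in POSITIVE) - sum(counts[w] for w in NEGATIVE)
    print(f"score {score:+d}: {text}")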

Data lakes

Data lakes are storage repositories that hold raw data in its native format until it is required. Storage is organized in a flat architecture, in contrast to the hierarchical storage of a data warehouse, and data structures are defined only when the data is needed. Vendors include Microsoft Azure, apart from several open-source options.
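
A minimal sketch of the data-lake principle in Python: raw events are stored in their native format in a flat directory, and structure is imposed only when a particular question is asked (schema-on-read). The directory name and event payloads are assumptions for the example.

import json
import os

LAKE_DIR = "lake"          # a flat directory standing in for object storage
os.makedirs(LAKE_DIR, exist_ok=True)

# Ingest: raw events are written as-is, in their native format, with no schema imposed.
raw_events = [
    '{"device": "thermostat-1", "temp_c": 21.5}',
    '{"device": "lock-7", "event": "opened", "user": "alice"}',
]
for i, payload in enumerate(raw_events):
    with open(os.path.join(LAKE_DIR, f"event-{i}.json"), "w") as f:
        f.write(payload)

# Schema-on-read: structure is defined only at query time, for the question being asked.
temps = []
for name in os.listdir(LAKE_DIR):
    with open(os.path.join(LAKE_DIR, name)) as f:
        record = json.load(f)
    if "temp_c" in record:                 # this query only cares about temperature events
        temps.append(record["temp_c"])
print(temps)    # [21.5]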

Spark

Spark is a key platform for IoT data: it simplifies real-time big data integration for advanced analytics and supports real-time use cases that drive business innovation. Such platforms generate native code, which needs further processing for Spark Streaming.
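
For reference, the sketch below follows the standard PySpark Structured Streaming word-count example: a text stream is read from a local socket and aggregated continuously. It assumes pyspark is installed and a socket source is running (for instance, nc -lk 9999).

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("IoTStreamSketch").getOrCreate()

# Read a text stream from a local socket.
lines = (spark.readStream
              .format("socket")
              .option("host", "localhost")
              .option("port", 9999)
              .load())

# Count words in the incoming stream, updating results as new data arrives.
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

query = (counts.writeStream
               .outputMode("complete")
               .format("console")
               .start())
query.awaitTermination()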

Conclusion

According to Gartner, as many as 43% of organizations are committed to investing in and implementing IoT, which is indicative of the massive scale of data that organizations will come to generate. For utilities, fleet management, or healthcare organizations, the use of IoT data will transform cost savings, operational infrastructure, and asset utilization, in addition to building capabilities in safety, risk mitigation, and efficiency. The right technologies deliver on the promise of big data analytics over IoT data repositories.

References

Angeles, R. (2016). SteadyServ Beer: IoT-enabled product monitoring using RFID. IADIS International Journal on Computer Science & Information Systems, 11(2).

Chen, H., Chiang, R. H., & Storey, V. C. (2012). Business intelligence and analytics: From big data to big impact. MIS Quarterly, 36(4), 1165-1188.

Fredriksson, C. (2015, November). Knowledge management with big data: Creating new possibilities for organizations. In The XXIVth Nordic Local Government Research Conference (NORKOM).

MacGillivray, C., Turner, V., & Lund, D. (2013). Worldwide Internet of Things (IoT) 2013–2020 forecast: Billions of things, trillions of dollars. Gartner Market Analysis.

Tan, P. N., Steinbach, M., & Kumar, V. (2013). Data mining cluster analysis: Basic concepts and algorithms. Introduction to Data Mining.

Troester, M. (2012). Big data meets big data analytics: Three key technologies for extracting real-time business value from the big data that threatens to overwhelm traditional computing architectures. SAS Institute White Paper.
