Thursday, February 11, 2016

PHP 5.6 on CentOS 5.9 with Apache Mod FCGI


mkdir -p /opt/php-5.6.18
mkdir -p /usr/local/src/php-5.6.18
cd /usr/local/src/php-5.6.18
wget -O php-5.6.18.tar.bz2 https://www.php.net/distributions/php-5.6.18.tar.bz2
tar jxf php-5.6.18.tar.bz2
cd php-5.6.18

yum install bzip2-devel curl-devel libjpeg-devel libpng-devel freetype-devel libc-client-devel.i686 libc-client-devel libmcrypt-devel -y
yum install libxml2-devel bzip2-devel libcurl-devel libpng-devel db4-devel postgresql-devel sqlite-devel aspell-devel net-snmp-devel libxslt-devel libxml2-devel pcre-devel t1lib-devel.x86_64 libmcrypt-devel.x86_64 libtidy libtidy-devel curl-devel libjpeg-devel libvpx-devel freetype-devel libc-client-devel -y

./configure --enable-bcmath --with-bz2 --enable-calendar --with-curl --enable-exif --enable-ftp --with-gd --with-jpeg-dir --with-png-dir --with-freetype-dir --enable-gd-native-ttf --with-imap --with-imap-ssl --with-kerberos --enable-mbstring --with-mcrypt --with-mhash --with-mysql --with-mysqli --enable-mysqlnd --with-openssl --with-pcre-regex --with-pdo-mysql --with-zlib-dir --with-regex --enable-sysvsem --enable-sysvshm --enable-sysvmsg --enable-soap --enable-sockets --with-xmlrpc --enable-zip --with-zlib --enable-inline-optimization --enable-mbregex --enable-opcache --enable-fpm --prefix=/opt/php-5.6.18 --with-xpm-dir=/usr --with-vpx-dir=/usr
make
make install

# Fast CGI (mod_fastcgi is packaged in the third-party RPMforge repository)
sudo rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
sudo rpm -ivh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el5.rf.x86_64.rpm
sudo yum install mod_fastcgi

# Disable mod php5
mv /etc/httpd/conf.d/php.conf /etc/httpd/conf.d/php.conf.disable

# Create a script in /var/www/cgi-bin/php.fcgi

#!/bin/bash
# Shell script to run PHP 5 using mod_fastcgi under Apache 2.x
# Tested under Red Hat Enterprise Linux / CentOS 5.x
### Set PATH ###
PHP_CGI=/opt/php-5.6.18/bin/php-cgi
PHP_FCGI_CHILDREN=4
PHP_FCGI_MAX_REQUESTS=1000
### no editing below ###
export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS
exec $PHP_CGI
# Make executable
chmod +x /var/www/cgi-bin/php.fcgi

# Add the handler in your apache vhost or directory config

<Directory /var/www/html>
        Options +Indexes -IncludesNOExec
        AllowOverride All
        AddHandler php5-fastcgi .php
        DirectoryIndex index.php index.html
        Order deny,allow
        Allow from all
        AddType text/html .php
        Action php5-fastcgi /cgi-bin/php.fcgi
</Directory>

# Alias is great for sharing subdirectories on a single host e.g. host/www1, host/www2
Alias /callcenter /var/www/html/app

# Or a virtual host

# <VirtualHost *:80>
#    ServerName
#    ServerAdmin webmaster@localhost
#    DocumentRoot "/var/www/html/callcenter"
#    LogLevel debug
#    ErrorLog logs/callcenter-error.log
#    CustomLog logs/callcenter-access.log combined
# </VirtualHost>

Tuesday, January 29, 2013

Using Thrift IDL as a REST Interface Definition Language

Deriving a Thrift IDL file from existing code: Python introspection can produce a Thrift interface definition file from class members and methods. It is also a natural way to describe REST services in the native language. One could then use this file to generate RESTful service boilerplate code for a target web platform of your choice, for example Django.

I have a module called introspection that uses inspect to fetch class methods along with their argument specs, in a way that can produce Python code that calls these methods with the correct imports. The module retrieves data members as well. I would like to use this to transform nltk.corpus.reader.wordnet members into REST services, and I will use it as the base for writing a Thrift definition file. I would also like to define my REST services as a Python class, RestServices, and have the Thrift file generated automatically along with the skeleton Django project.
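A rough sketch of the kind of introspection involved (the class below and the emitted Thrift text are illustrative stand-ins of my own, not the actual introspection module):

```python
import inspect

class WordnetService:
    """Hypothetical stand-in for a class being introspected."""
    def synsets(self, word, pos): ...
    def lemma_names(self, lang): ...

def thrift_service(cls):
    """Emit a Thrift-style service definition from a class's public methods.
    Python is untyped, so every parameter defaults to string here."""
    lines = ["service %s {" % cls.__name__]
    for name, fn in inspect.getmembers(cls, inspect.isfunction):
        if name.startswith("_"):
            continue
        params = [p for p in inspect.signature(fn).parameters if p != "self"]
        args = ", ".join("%d: string %s" % (i + 1, p) for i, p in enumerate(params))
        lines.append("  string %s(%s)" % (name, args))
    lines.append("}")
    return "\n".join(lines)

print(thrift_service(WordnetService))
```

Typing every argument as string is of course the crude fallback; the semantic approach discussed below is meant to refine those types.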

Thrift can define enumerated constants, structs, and complex structures in the form of JSON constants. I plan on using these to define the object types that I will then serialize and deserialize using either the Thrift transport stack or jsonpickle. The aim is to make REST POST requests with serialized JSON objects in the request headers that will form the parameters to REST functions and produce objects in the service code.

These functions will then return JSON responses that might contain standard exception structures.
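For instance, a standard exception structure in such a JSON response might look like this (the field names are illustrative guesses, not part of the Thrift spec):

```python
import json

# Illustrative "standard exception structure" for a REST error response.
error_response = {
    "result": None,
    "exception": {
        "type": "WordNotFound",
        "message": "no synsets for 'qwrty'",
        "code": 404,
    },
}
print(json.dumps(error_response))
```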

The Django-specific build instructions will be placed in doc-style comments in the Thrift file. Every method will have doc-style or inline comments containing its base URI, and project-specific instructions will also be stored in doc-style comments.

  • // @base_uri: wordnet/
  • /** @target django
  •  *   @project myproject
  •  *   @app myapp
  •  *   @settings path/to/my/ */
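A minimal sketch of pulling such annotations out of Thrift doc-style comments (the annotation names come from the examples above; the regex and function name are my own):

```python
import re

# Sample comments in the style shown above.
DOC = """// @base_uri: wordnet/
/** @target django
 *   @project myproject
 *   @app myapp */"""

def parse_annotations(comment):
    """Collect @key value pairs from Thrift doc-style comments."""
    return dict(re.findall(r"@(\w+):?\s*(\S+)", comment))

print(parse_annotations(DOC))
```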

I will generate a Django skeleton with URL mappings starting with "/wordnet/method_name" and use the comments to build the skeleton Django project.

The frontend JavaScript application interface will let you call REST methods to produce JSON objects stored in a client-side JavaScript container. You will also be able to make changes to these objects, and then use them as parameters to other REST methods.

I expect to run into a few hurdles, especially when deriving method parameter structs, as Python isn't a statically typed language. I have thought about using a semantic approach to determine which functions produce the expected object, and using runtime introspection to produce the structure definition.

This is because I believe in harnessing the thought processing that goes into choosing names.

In cases where this is not possible, a notification to review the definition file will be raised. The definition file will always need to be reviewed anyway, right?

This implementation just serves to use introspection and semantics to the best of their abilities to produce a base definition to work from. I'm designing this with other frameworks in other languages in mind, to make it possible to produce, for example, CakePHP REST skeleton projects. The working name for this project is restify.

Wednesday, March 14, 2012

Hubot as an assistant for developers, medical lab scientists, math teachers, biotechnology professors, chemical engineers, and mechanical engineers

My forked version of Hubot will be a bot that can execute hubot-scripts scripts as well as keep you entertained with joke-ish commentary using AIML, leaving you with a 'haha, is this really a robot?' feeling.
I was busy setting up my development environment yet again, started getting my new Symfony 2 application rolling, and hit a blocker in the form of a permissions error. So I wondered to myself: what if I could ask Hubot to auto-correct this for me? Wouldn't that be awesome? Hubot has a plugin architecture, and customized 'tell Hubot what to do' scripts can be created.

I'll be sampling this CoffeeScript/Node.js wonder with the aim of getting my own version of Hubot, which I'm going to call Codi, the developer's assistant, out there.

Here are a few cases I think hubot will one day be able to solve easily:

Medical Laboratory Scientist.
You own a lab and you would like to be able to tell your bot to create a summary and a spreadsheet using locally harvestable information, all nicely indexed with some form of slocate.
Well, you just ask via a chat session and Hubot replies with the path to a folder containing the stuff he has just prepared for you.
Or maybe he just opens it for you.

Biotechnology Professor.
You need to access those library databases with the latest articles pertaining to yeast fermentation techniques, and want to be absolutely sure you've scoured every site in existence with the info you need.
Well a hubot chat session for that might look like:

You: Find yeast fermentation

Hubot: found - information, free, free, cost $..
Would you like me to open your auto compiled summary?

You: sure (You open and view the report/summary)

Hubot: there are proprietary sources; I've analyzed your budget
and deduced that you will still have enough to eat after purchasing.
Would you like me to purchase $400 for a total of 3 years' worth of subscription using your PayPal account?

You: why not

Hubot: you would miss crucial information in your summary report sir

You: go for it

Hubot: opening auto-compiled summary according to standard prof summary report type 16A...

This is probably the easiest place to apply Hubot to its full potential. With the list of already available coffee scripts, Hubot will possibly be able to save you, the developer, loads of time by auto-correcting tedious fixables for known issues. This developer recommends a uniform shared knowledge base used to store and retrieve known fixes in a global catalogue of systems and their known error outputs. It's a bit ambitious, I know. This could start locally for now and eventually make it through the proxy caches out there onto its very own dedicated cache server, a web host.

An example of a chat session for how hubot might alleviate the stresses of setting up development environments might look like:

You: yooo codi, set up my symfony in the spaz room (where the spaz room tag is known by Codi from previous chats to be /home/charl/spaz)

Codi: found Symfony2, downloading, extracting, read docs, check config
you have a Symfony autobuilt in /home/charl/spaz
check the documentation on how to build your first app

You: Thanks

Codi: No problem, opens documentation location in a new window

You - (build along and see that the cache directories couldn't be created, permissions error)

You: Whats wrong with my symfony app in spaz?

Codi: Slight misconfiguration, your logs indicated a cache permissions error
This is now corrected as per FilesystemKB #12388 ...

You: eeerm

Codi: oh ya, opening the browser (based on your previous chat sessions, stats indicate that no actions should be automatically performed, so ...)

So this will have dramatically cut the dev time spent on configuring the environment from about 15 minutes down to 2 seconds, depending on your season and memory recall time. Either way, this will enable fix once, never again, anywhere.

Another tangent, hold on. Fix once, never again, anywhere is a concept of transferring knowledge of a solution to a certain problem in its problem domain, much like DNS or apt entries/repos are propagated.

And we're back... With a plugin architecture like this, Codi will probably have learnt how to completely configure Symfony2 properly from the ground up, and seasoned or not, quick memory recall or not, everyone is now immediately within reach of a reliable Symfony setup, ready to let Codi learn how to possibly write your apps for you using something I call idea (integrated design execution aid): a CoffeeScript plugin I plan on writing to add automated programming tasks like stub creation, skeletons from a workflow for any language, using GitHub for code learning, and statistical analysis and neural-net-type decision making for implementing code from a markdown-format specification language.

I've started a little project with a semi-spec detailing the finer, well, details of how I plan on going about implementing this:

  • automated browsing, Selenium
  • environment awareness via provisioning and env vars, Puppet, Capistrano
  • log trawling, e.g. searching an Apache log for PHP errors into Lucene
  • and actioning it via callable URLs returned from Apache Lucene
  • adherence to strict business processes/rules/protocols using Apache ODE
  • a custom funny developer guy that does what you ask him to, using AIML
You can find Hubot on GitHub.

Stay tuned.

Wednesday, March 11, 2009

Web Workflows - An Automated Approach to Web Browsing

The web needs an easy to read language for defining web tasks.

I propose XML based Web Workflows:
<input name="q">Straw Berry</input>
<click name="btnG"></click>

Execution of this web workflow would result in a document with the search results attached.
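As an illustration of what an interpreter for this format does (this Python sketch, the wrapping <workflow> root element, and the function names are mine, not the Chisimba module's), the workflow can be parsed into an ordered list of browser actions:

```python
import xml.etree.ElementTree as ET

# The two-step search workflow from above, wrapped in an assumed root element.
WORKFLOW = """<workflow>
  <input name="q">Straw Berry</input>
  <click name="btnG"></click>
</workflow>"""

def parse_workflow(xml_text):
    """Turn a Web Workflow document into (action, target, value) steps
    that a browser driver could replay in order."""
    root = ET.fromstring(xml_text)
    return [(step.tag, step.get("name"), step.text) for step in root]

for action, target, value in parse_workflow(WORKFLOW):
    print(action, target, value)
```

A driver layer would then map "input" to filling a form field and "click" to submitting the named button, returning the resulting document.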

I have a working open source implementation of this in PHP via Chisimba.

This module interprets XML-based Web Workflows to programmatically browse
the web and return a particular URI endpoint. The language includes syntax like

<input name="q">Straw Berry</input>
<click name="btnG"></click>

The above represents an easy-to-understand, high-level, procedural language for
automated web document retrieval.

This module will also allow you to specify login credentials, automating logging into sites in order to access protected resources. It will be used by the librarysearch module to assist with document retrieval on clusters of hosts.

This implementation is completely portable, and because it accepts the web workflow as input to produce the document, it could run as a standalone app like curl, or maybe even as a curl extension.

This would contribute hugely to federated search efforts where an API isn't available for a certain host.

Screen scraping tasks have mostly been executed as a very unordered and messy combination of curl/lynx -d/wget requests. This process needs to be formalized by the web community, like the W3C.

The Java guys already have this available to them (see here), but it's too proprietary in the sense that you can only get it up and running by creating the engine from within the Java code.

Webstats watchers and web ad agencies aren't going to like the adoption of a formal method for robots to surf their sites and effectively carry content to users. Perhaps this will create an opportunity for other revenue models to surface.

Comments anyone?

Wednesday, March 4, 2009

PHPUnit Test Case Builder for Chisimba

I have started work on a very nifty module for Chisimba that will allow any Chisimba module developer to create a complete PHPUnit test case for their module.

The Chisimba PHPUnit Test Case Builder generates PHP test case classes for any Chisimba module.

Module Outputs
It implements a code pattern analyzer that classifies code into logical groups like Data Management, Utility, Callback, and Display methods.

These code groupings are then transferred to a single PHPUnit skeleton class divided into the logical associations. All the standard, known asserts are added to the code, making it ready for PHPUnit to process on the fly.
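A rough sketch of the kind of name-based pattern analysis involved (the prefixes and group mapping here are my guesses at the classification, written in Python rather than the module's PHP):

```python
# Map method-name prefixes to the logical groups described above.
# This naive prefix match is an assumption about how the analyzer works.
GROUPS = {
    ("add", "edit", "delete", "save"): "Data Management",
    ("show", "render", "display"): "Display",
    ("on", "handle"): "Callback",
}

def classify(method_name):
    """Classify a method into a logical group by its name prefix;
    anything unmatched falls through to Utility."""
    for prefixes, group in GROUPS.items():
        if method_name.lower().startswith(prefixes):
            return group
    return "Utility"

print(classify("addUser"))     # Data Management
print(classify("renderMenu"))  # Display
print(classify("formatDate"))  # Utility
```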

It creates a file with the following checks:
  • Uses the register.conf file to check against the current environment for dependencies.
  • Uses the live database structures to construct wrapper code for individual table field checks.
  • The Data Management tests are broken down logically into Add/Edit/Delete sections and use the table field checks to verify expected data (using the table field data type to produce test data).
This provides a solid base for the developer to start unit testing from, as the last step would be to peruse the skeleton and logically select the proper assertions to use. The benefits of having logically inclined test cases include the ability to make core changes and see the rippling effects, identify possibly problematic areas (e.g. fields as arguments in insert/edit that do not match database field names), identify optimization targets, and improve the overall integrity of your module.

Future Plans
For display/UI testing, there are plans to integrate and automate Selenium test code generation, as PHPUnit already has some support for this.

Stay tuned for progress.