This is my personal blog, where I write about the technology work I would like to share.
Evolutionary Java memory leak detection tool
Here is a very nice article about Plumbr, an evolutionary Java memory leak detection tool. I'll be sure to try it out on my next project:
http://java.dzone.com/articles/plumbr-30-evolutionary-java-leak
Hybris Layered architecture
Wordpress blog migration - new domain
Steps to migrate a WordPress blog from one domain to another domain on a different server.
Permalink - URL change
Changed the blog permalink URL structure - an update from date-stamped links to named permalinks.
Creating new wordpress site and switching
Add a subdomain in easyDNS for blog.xxxxx.com
A new folder should be created for blog.xxxxx.com
Copy over all the WordPress content from the currentsite.com folder to blog.xxxxx.com
Create a new database for the new domain (use the same database name and username/password as the old site)
Back up the existing WordPress database
Upload the Search and Replace WordPress script into the new site folder where wp-config.php is located
Go to http://blog.xxxxx.com/searchreplacedb2.php.
Go through the wizard and change all the URLs from http://www.currentsite to http://blog.newsite
Make a copy of the backup and edit the database to change all the URLs from currentsite to blog.newsite
Edit the blog.xxxxx.com wp-config.php file to point to the new database
Test blog.xxxxx.com and resolve any issues that come up
Once everything looks good in testing, follow the rest of the steps
Important: go to Administration > Settings > General and change both URLs (WordPress Address and Site Address) to blog.xxxxx.com
Setting Redirect and Permalink changes
Edit the .htaccess file in currentsite.com to set up the redirect from the old domain to the new one
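As a sketch, that redirect can be done with mod_rewrite. The domains below are the placeholders used in the steps above, and the rule is an illustrative example, not the exact file from this migration:

```apache
# Hypothetical .htaccess in the old currentsite.com web root:
# permanently redirect every request to the same path on the new subdomain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?currentsite\.com$ [NC]
RewriteRule ^(.*)$ http://blog.xxxxx.com/$1 [R=301,L]
```

The R=301 flag makes it a permanent redirect, so search engines transfer their ranking to the new URLs.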
Ehcache configuration
Ehcache configuration settings.
Configuring cache to work with clustering
http://www.ehcache.org/documentation/user-guide/configuration
Hibernate caching - first level, second level (using Ehcache) and query caching
This is a nice tutorial about the Hibernate cache at its different levels:
http://www.tutorialspoint.com/hibernate/hibernate_caching.htm
Editing hybris Emails in administration console
Modifying the Order Confirmation email in the hMC
Go to hmc/hybris
Go to WCMS and select Pages
Search for and open the email page that you want to edit
Go to the Properties tab and click on the search icon of the Page template field
A list of templates will open in a new window
Select the email template that you want to edit and click on the edit icon at the far left corner of that row.
Go to the Email tab and click on the search icon for HTML Email Template
The search window will open up again with all the templates used for the email, like subject, body, top, footer, etc.
Select the body template and click on the edit icon, again at the far left corner
The email content will be in the General tab; modify it as you wish.
DB connection error in hybris
We face this error from the hybris platform in several scenarios; it happens whenever we do anything to the database settings. Today I changed the driver from jTDS to the Microsoft SQL Server driver and hit this error because I forgot to put the old driver back. I have also faced it when the connection parameters were not set properly, and mostly it happens when the removeCart API is being called.
INFO | jvm 1 | srvmain | 2013/04/24 01:37:04.748 | SEVERE: Exception sending context initialized event to listener instance of class de.hybris.platform.spring.HybrisContextLoaderListener
INFO | jvm 1 | srvmain | 2013/04/24 01:37:04.748 | java.lang.Error: connection was NULL, this should NEVER occur. (txbound: true)
INFO | jvm 1 | srvmain | 2013/04/24 01:37:03.874 | ERROR [WrapperSimpleAppMain] [Utilities] invalid db connection params com.microsoft.sqlserver.jdbc.SQLServerDriver::jdbc:sqlserver://172.18.212.31:1433;responseBuffering=adaptive;loginTimeout=10;databaseName=hmhstore-v1;... - check platform properties (or database)!
INFO | jvm 1 | srvmain | 2013/04/24 01:37:03.874 | java.lang.Error: connection was NULL, this should NEVER occur. (txbound: true)
Hybris and JTDS JDBC Database driver for MSSQL Server
In order to resolve some of the database connectivity issues we were facing, I updated the hybris ecommerce platform to use the jTDS driver. But once I did that, there were issues getting timestamps from the platform, so the cron job schedules did not work and the cron jobs were not executing. This is caused by a BIGINT datatype conversion failure.
INFO | jvm 1 | srvmain | 2013/04/24 01:07:29.137 | INFO [WrapperSimpleAppMain] [MetaInformationManagerEJB] error getting system init/update timestamp: Unable to convert between net.sourceforge.jtds.jdbc.ClobImpl and BIGINT. - ignored
Hibernate JPA Core features
Using Session is Hibernate specific; using EntityManager is much better because you stick to the JPA specification instead of Hibernate.
Hibernate Caching:
First-level caching is context-based and uses the Session.
The second-level cache is for entities that don't change between use cases.
Query cache: to cache query result sets, set hibernate.cache.use_query_cache="true".
Enable the second-level cache for entities by setting the following parameter in persistence.xml:
shared-cache-mode =
ENABLE_SELECTIVE (Default and recommended value): entities are not cached unless explicitly marked as cacheable.
DISABLE_SELECTIVE: entities are cached unless explicitly marked as not cacheable.
ALL: all entities are always cached.
NONE: caching is disabled for the persistence unit.
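For reference, a minimal persistence.xml fragment selecting the default mode might look like this (the unit name is illustrative):

```xml
<!-- Hypothetical persistence unit; under ENABLE_SELECTIVE only entities
     annotated @Cacheable go into the second-level cache. -->
<persistence-unit name="myUnit">
    <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
</persistence-unit>
```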
cache concurrency strategy
(NONE, READ_ONLY, NONSTRICT_READ_WRITE, READ_WRITE, TRANSACTIONAL)
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class Forest { ... }
Transactional: Use this strategy for read-mostly data where it is critical to prevent stale data in concurrent transactions, in the rare case of an update; it requires a fully transactional (JTA) cache provider.
Read-write: Again, use this strategy for read-mostly data where it is critical to prevent stale data in concurrent transactions, in the rare case of an update; it maintains consistency with a soft lock rather than full transactions.
Nonstrict-read-write: This strategy makes no guarantee of consistency between the cache and the database. Use this strategy if data hardly ever changes and a small likelihood of stale data is not of critical concern.
Read-only: A concurrency strategy suitable for data which never changes. Use it for reference data only.
flush() is used to synchronize the session state with the database, and evict() is used to remove an object from the session cache.
contains() is used to find out whether an object belongs to the session cache or not.
Session.clear() is used to remove all objects from the session cache.
If a query needs to force a refresh of its query cache region, call Query.setCacheMode(CacheMode.REFRESH).
hibernate.cfg.xml
<property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.SingletonEhCacheProvider" />
<property name="hibernate.cache.provider_configuration" value="/ehcache.xml" />
<property name="hibernate.cache.use_second_level_cache" value="true" />
Ehcache has its own configuration file, ehcache.xml, which should be on the CLASSPATH of the application.
<diskStore path="java.io.tmpdir"/>
<defaultCache maxElementsInMemory="1000"
              eternal="false"
              timeToIdleSeconds="120"
              timeToLiveSeconds="120"
              overflowToDisk="true" />
<cache name="Employee"
       maxElementsInMemory="500"
       eternal="true"
       timeToIdleSeconds="0"
       timeToLiveSeconds="0"
       overflowToDisk="false" />
Every non-static, non-transient property (field or method, depending on the access type) of an entity is considered persistent, unless you annotate it as @Transient.
@Id @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="SEQ_STORE")
@Id @GeneratedValue(strategy=GenerationType.IDENTITY)
Eager Fetch and Lazy Fetch:
The default is eager fetching; it is equivalent to not defining the annotation at all.
@Basic int getLength() { ... } // persistent property
@Basic(fetch = FetchType.LAZY) String getDetailedComment() { ... } // persistent property
To enable property-level lazy fetching, your classes have to be instrumented: bytecode is added to the original class to enable the feature.
@Lob indicates that the property should be persisted in a Blob or a Clob depending on the property type: java.sql.Clob, Character[], char[] and java.lang.String will be persisted in a Clob; java.sql.Blob, Byte[], byte[] and serializable types will be persisted in a Blob.
Embeddable Object:
@Entity
public class User {
    private Long id;
    private Address address;

    @Id
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    @Embedded
    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }
}

@Embeddable
@Access(AccessType.PROPERTY)
public class Address {
    private String street1;
    public String getStreet1() { return street1; }
    ...
}
Attribute Overrides while using Embeddable:
@Entity
public class Person implements Serializable {
    // Persistent component using defaults
    Address homeAddress;

    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name="iso2", column = @Column(name="bornIso2")),
        @AttributeOverride(name="name", column = @Column(name="bornCountryName"))
    })
    Country bornIn;
    ...
}
Joins:
@MapsId @OneToOne @JoinColumn(name = "patient_id") Person patient;
@MapsId("userId")
@JoinColumns({
    @JoinColumn(name="userfirstname_fk", referencedColumnName="firstName"),
    @JoinColumn(name="userlastname_fk", referencedColumnName="lastName")
})
@OneToOne
User user;
Many to one:
@ManyToOne( cascade = {CascadeType.PERSIST, CascadeType.MERGE} )
@JoinColumn(name="COMP_ID") public Company getCompany() { return company; }
Ordering:
@OrderColumn(name = "orders_index") public List<Order> getOrders() { return orders; }
Persistence with Cascading:
CascadeType.PERSIST: cascades the persist (create) operation to associated entities if persist() is called or if the entity is managed
CascadeType.MERGE: cascades the merge operation to associated entities if merge() is called or if the entity is managed
CascadeType.REMOVE: cascades the remove operation to associated entities if delete() is called
CascadeType.REFRESH: cascades the refresh operation to associated entities if refresh() is called
CascadeType.DETACH: cascades the detach operation to associated entities if detach() is called
CascadeType.ALL: all of the above
Hibernate Search Configuration in Spring:
<!-- HSearch settings -->
<property name="hibernate.search.default.indexBase" value="${hibernateIndexDir}" />
<property name="hibernate.search.default.directory_provider" value="${hibernateDirectoryProvider}" />
@Indexed(index = "myentity") @Table(name = "entity_table")
@Field(index = Index.TOKENIZED, store = Store.YES) private String myString;
Versioning for optimistic locking
The version property will be mapped to the OPTLOCK column, and the entity manager will use it to detect conflicting updates (preventing lost updates you might otherwise see with the last-commit-wins strategy).
@Entity
public class Flight implements Serializable {
    ...
    @Version
    @Column(name="OPTLOCK")
    public Integer getVersion() { ... }
}
JVM Performance Tuning
Heap layout: Eden | Survivor (S1, S2) | Old generation
When Eden fills up, the GC stops the JVM and moves the live objects to the survivor space.
The young generation should be large enough for your application's memory needs and the number of concurrent requests.
Example: if the application needs 10 MB per request and you can get 100 concurrent hits, then the young gen space should be 1000 MB.
General rule - give your new gen as much space as you can.
The survivor space should be large enough to hold the 10 MB of active objects per request plus the tenuring ones.
Specify a tenuring threshold so that the long-lived objects tenure fast.
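The sizing rule above is simple multiplication; as a toy sketch (the method name and numbers are illustrative, not any HotSpot API):

```java
// Back-of-envelope young-gen sizing from the rule above:
// per-request footprint times the expected number of concurrent requests.
public class YoungGenSizing {
    static int youngGenMb(int perRequestMb, int concurrentRequests) {
        return perRequestMb * concurrentRequests;
    }

    public static void main(String[] args) {
        // 10 MB per request and 100 concurrent hits -> 1000 MB young gen
        System.out.println(youngGenMb(10, 100) + " MB");
    }
}
```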
Old Generation Collector
-XX:+UseSerialGC - collection under a single thread in the old gen
-XX:+UseParallelGC - single-threaded in the old generation and multi-threaded in Eden
-XX:+UseParallelOldGC - multi-threaded in both the old gen and Eden
Low Pause Collectors:
-XX:+UseConcMarkSweepGC - only for the old gen, not for Eden; it runs while the application threads are running and tries to collect most of the garbage. It is the only thing that runs in parallel with the application.
-XX:+UseG1GC - part of Java 8; have to check its efficiency.
Adaptive size policy - the throughput collectors can automatically tune themselves when you specify these:
-XX:+UseAdaptiveSizePolicy
-XX:MaxGCPauseMillis=100
-XX:GCTimeRatio=n sets the throughput goal so that at most 1/(1+n) of total time is spent in GC; for example, GCTimeRatio=19 targets about 5% of time in GC, the rest for the application.
Always tune the young generation first and then the old gen.
Enable -XX:+PrintGCDetails, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution and watch the survivor size.
The bigger the Eden size the better; watch the survivor space and tune it.
-XX:+PrintHeapAtGC sample output:
Eden 4096000k 0% used
from 23200k 77% used
to 23200k 0% used
The young gen is Eden + the survivor spaces.
Objects move from Eden into the survivor space; its usage should not be 100%. If it is 100%, the survivor space cannot fit a single Eden collection and the live objects are promoted straight into the old gen, which we really don't want.
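Putting the flags from this section together, a hypothetical launch line might look like this (the heap sizes and jar name are placeholders; the flag spellings are the standard HotSpot ones):

```shell
java -Xms4g -Xmx4g -Xmn1g \
     -XX:+UseParallelOldGC \
     -XX:+UseAdaptiveSizePolicy -XX:MaxGCPauseMillis=100 \
     -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution \
     -jar myapp.jar
```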
-XX:+PrintTenuringDistribution
Things to look at:
Number of ages - how many GC runs objects survive; the number of times the GC has to run to free up the young gen.
Size distribution across the ages - this should be strongly declining, meaning more and more objects are getting freed up; if not, you have a memory leak.
age 1 : 1232313 bytes, 129329 total
age 2 : .....
age 3 : ......
General:
For bulk (batch) applications, use a large Eden space and the adaptive policy.
For the rest, use a large Eden and the adaptive policy, or concurrent mark and sweep.
Using concurrent mark and sweep (CMS):
-verbose:gc and -XX:+PrintGCDetails - if the result doesn't show any full GC calls, then we are done. If it shows that CMS was not able to run fast enough for your application, it will fall back to a full GC and take a lot of time.
Then tune the young generation.
Tuning old Gen:
Observe the minimum and maximum working set; check the full GC numbers under stable state and under load - you can force a full GC by cheating on the throughput garbage collector.
Overprovision the full GC number by 25%-33% - it gives a cushion for CMS.
CMS fragments the free memory.
Tuning:
If responsiveness is still not good enough - because there are now too many live objects in the young gen - then flip the strategy: reduce the new size, survivor space and tenuring threshold so that objects go into the old gen faster and concurrent mark and sweep can take care of them.
Sometimes the best bet is to accept that the JVM cannot handle any more and split your services into multiple JVMs.
The synchronized keyword is a read and a write barrier; sometimes you don't need both barriers.
The volatile keyword is a half-barrier; try using it.
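A minimal sketch of the volatile half-barrier in action (the class and field names are made up for illustration): one thread polls a volatile stop flag while another sets it, and the write becomes visible to the reader without taking a lock.

```java
// Stop-signal pattern: 'running' is volatile, so the worker's read is
// guaranteed to see the controller's write without using synchronized.
public class StopFlag {
    private volatile boolean running = true;

    public void run() {
        while (running) {
            // busy work; the volatile read on each iteration sees the update
        }
    }

    public void stop() {
        running = false;   // volatile write: a half-barrier, no lock taken
    }

    public static void main(String[] args) throws InterruptedException {
        StopFlag work = new StopFlag();
        Thread worker = new Thread(work::run);
        worker.start();
        Thread.sleep(50);
        work.stop();       // without volatile, the worker might spin forever
        worker.join(2000);
        System.out.println("worker stopped: " + !worker.isAlive());
    }
}
```

If the flag were a plain boolean, the JIT could hoist the read out of the loop and the worker might never observe the change.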
l10n and i18n
The first and foremost thing for localization (l10n) and internationalization (i18n) is to create resource bundles and have every piece of text on the website accessed through a localized resource bundle.
We can also split the resource bundles by major component of the website to make sure we load resources selectively instead of loading them all at once.
We should retrieve the user's locale dynamically or through their user info, create a Java Locale object, and pass it to every object that holds locale-specific data.
Separate your resource bundles into a global bundle (xyz.properties), language bundles like xyz_zh.properties (Chinese) and region-only bundles like xyz_zh_CN.properties (China only).
Every String and formatting object (numbers, dates, symbols) that the end user sees has to be created by passing the Locale as a parameter.
Get a resource bundle like:
private static void test(Locale locale) {
    ResourceBundle rb = ResourceBundle.getBundle("RBControl", locale,
            new ResourceBundle.Control() { ... });
}
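A self-contained sketch of the bundle lookup (the bundle classes stand in for xyz.properties files so the example runs without resources on disk; all names here are illustrative):

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

public class BundleDemo {
    // Global fallback bundle, the class-based equivalent of xyz.properties.
    public static class Labels extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "greeting", "Hello" } };
        }
    }

    // Language bundle, the equivalent of xyz_fr.properties.
    public static class Labels_fr extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "greeting", "Bonjour" } };
        }
    }

    static String greet(Locale locale) {
        // The no-fallback control keeps the lookup from drifting to the
        // machine's default locale before reaching the base bundle.
        ResourceBundle rb = ResourceBundle.getBundle(
                "BundleDemo$Labels", locale,
                ResourceBundle.Control.getNoFallbackControl(
                        ResourceBundle.Control.FORMAT_DEFAULT));
        return rb.getString("greeting");
    }

    public static void main(String[] args) {
        System.out.println(greet(Locale.FRENCH)); // Bonjour
        System.out.println(greet(Locale.US));     // Hello (falls back to base)
    }
}
```

The lookup chain is exactly the global/language/region layering described above: Labels_fr wins for French, and anything without a more specific bundle falls back to the base Labels.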
Data to be considered for Localization and Internationalization:
Currency
Language
local custom content
symbols
postal code
character encoding - UTF (i18n)
audio content
subtitles
Government-assigned numbers such as SSN (i18n)
titles, names
Images and colors
weights and measures (i18n)
date/time format (i18n)
timezone (i18n and countries like US l10n)
Data like quantity, currency and price formatting can be stored in separate resource bundles; for example, you might load all of the GUI labels for an order-entry window into a ResourceBundle called OrderLabelsBundle.
Currency Formatter:
static public void displayCurrency(Locale currentLocale) {
    Double currencyAmount = new Double(9876543.21);
    Currency currentCurrency = Currency.getInstance(currentLocale);
    NumberFormat currencyFormatter = NumberFormat.getCurrencyInstance(currentLocale);
    System.out.println(currentLocale.getDisplayName() + ", "
            + currentCurrency.getDisplayName() + ": "
            + currencyFormatter.format(currencyAmount));
}
The output generated by the preceding lines of code is as follows:
French (France), Euro: 9 876 543,21 €
German (Germany), Euro: 9.876.543,21 €
English (United States), US Dollar: $9,876,543.21
Number Formatter:
static public void displayNumber(Locale currentLocale) {
    Integer quantity = new Integer(123456);
    Double amount = new Double(345987.246);
    NumberFormat numberFormatter = NumberFormat.getNumberInstance(currentLocale);
    String quantityOut = numberFormatter.format(quantity);
    String amountOut = numberFormatter.format(amount);
    System.out.println(quantityOut + " " + currentLocale.toString());
    System.out.println(amountOut + " " + currentLocale.toString());
}
This example prints the following; it shows how the format of the same number varies with Locale:
123 456 fr_FR
345 987,246 fr_FR
123.456 de_DE
345.987,246 de_DE
123,456 en_US
345,987.246 en_US
Time Formatter:
DateFormat timeFormatter = DateFormat.getTimeInstance(DateFormat.DEFAULT, currentLocale);
Date Formatter:
Date today;
String dateOut;
DateFormat dateFormatter = DateFormat.getDateInstance(DateFormat.DEFAULT, currentLocale);
today = new Date();
dateOut = dateFormatter.format(today);
System.out.println(dateOut + " " + currentLocale.toString());
A good, detailed explanation of byte allocations and of the throughput and memory-management efficiencies that simple program changes can bring. A very clear, practical walkthrough of Java memory management and of tuning it for optimized performance.
http://www.infoq.com/presentations/JVM-Performance-Tuning-twitter-QCon-London-2012
LEVERAGING THE CISCO UCS ADVANTAGE FOR YOUR RED HAT ENTERPRISE VIRTUALIZATION DEPLOYMENTS
It is imperative to construct a virtualization framework on which to build your evolving cloud stack. Many organizations are challenged by limited resources in traditional x86 platforms, and increasingly high software licensing costs for proprietary hypervisor and guest management solutions. Learn how Cisco UCS was built to provide a more consolidated virtualization foundation for Red Hat Enterprise Virtualization. With its unique virtual fabric extender architecture (VM-FEX), Cisco UCS provides increased flexibility, highly scalable server access and simplified operations. The combined solution delivers unmatched savings for architects looking to build an open source cloud foundation.
Speakers: Roger Barlow, Product manager, Cisco Systems, Inc.
Roger Barlow is a Product Manager in Cisco's Data Center Group (DCG) managing the Unified Compute System (UCS) Product. Roger has a technical background with 15 years of experience in Data Center Operations. He has almost 20 years in the technology industry with notable companies such as EMC, Opsware/HP, and now Cisco. Roger's focus at Cisco is on the UCS virtualization and Cloud management Ecosystem, driving the UCS Virtualization & Cloud Strategy and ensuring the product's success with partner technologies.
James Rankin, Senior Solution Architect, Red Hat James Rankin works as a senior solution architect at Red Hat. In this role, he architects server virtualization and virtual desktop infrastructure (VDI) solutions in a variety of vertical markets. James also performs on-site proof-of-concept Red Hat Enterprise Virtualization deployments and teaches “Red Hat Enterprise Virtualization Acceleration” classes to Red Hat partners. James has experience with major virtualization solutions on the market and with most major storage vendors. He has been designing virtualization and storage solutions for more than five years. He received his master’s degree in Business Information Technology from DePaul University.
Amazon.com CTO Werner Vogels explains how Amazon has become a platform provider, how an increasing number of diverse businesses are built on the Amazon.com platform, and the lessons learnt along the way.
The non-functional requirements are always sidelined, but they are very important to developing a successful system.
These are requirements that specify criteria that can be used to judge the operation of a system. They tend to be global qualities of a software system; some examples are usability, performance, security and flexibility. The attached link provides very good documentation about non-functional requirements.
This problem of attempting to work quality in at the end of the development phase has been around as long as we have been doing software development.
Either the non-functional requirements were not specified (in time), or compromised without explicit attention to the trade-offs involved. Not paying attention to eliciting, documenting and tracking non-functional requirements makes it harder to prioritize and make trade-offs between the quality of the product, the cost to develop and enhance it, and the time-to-market of current and future releases. Without quality targets to guide the architects and engineers, design choices are arbitrary, and it is hard to assess the system during architecture and design reviews and system test.
Very nice presentation about how architects evolve and the need for architects to not give up their passion for coding, while taking care of creating the right architecture and making sure that all the standards and best practices are setup and the team follows it.