Want to learn more about testing with Arquillian and how to use the ShrinkWrap API to create deployment artifacts?
A lot of useful information can be found in the book Continuous Enterprise Development in Java. Although all of it is also available on the Internet, the book presents the material consistently, so you shouldn't need to hunt for much more information about the API.
This blog is a collection of thoughts around Linux, Java, JavaScript, etc. Looking for great opportunities.
Sunday, December 6, 2015
Friday, December 4, 2015
Arquillian integration testing with DB2 and WildFly 8.2.1
Hi!
You have decided that you need integration tests in addition to your unit tests. How can we automate integration testing of J2EE code?
Arquillian to the rescue! Arquillian is an integration testing framework for Java EE. To get started you need the Arquillian library, a container adapter, a container, and a test :-).
That sounds like a lot to configure. Well, not so much of it by hand. For the integration tests I have chosen WildFly 8.2.1. I also want Arquillian itself to manage starting and stopping WildFly.
For those who are impatient, look at GitHub: https://github.com/chernykhalexander/arquillian_db2_J2EE.git.
The easiest way to add Arquillian to pom.xml is to use the JBoss Forge tool. Forge is a utility for managing Java projects. Alternatively, you can look at the pom.xml on GitHub.
This pom also:
1) downloads the WildFly 8.2.1 distribution and unpacks it into the target directory;
2) copies the IBM DB2 driver into the unpacked WildFly;
3) uses maven-failsafe-plugin to run integration tests. Its execution is bound to the integration-test and verify goals. If you need a test to be executed during the integration-test phase, make sure its file name ends with "IT".
How do you define the JNDI datasource? That is what wildfly-ds.xml is responsible for.
The other interesting part is the integration test itself. Again, you can ease the creation of the test with the Forge tool.
When creating the test, pay attention to the @Deployment method. You need to specify all the classes and files your test uses.
After all that is done, run mvn integration-test and that is all! Happy testing.
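To illustrate the @Deployment method, here is a minimal sketch; the Greeter class and archive name are hypothetical, not taken from the repository:

```java
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreeterIT { // name ends with "IT" so maven-failsafe-plugin picks it up

    // Arquillian builds and deploys this archive to WildFly before running the tests.
    @Deployment
    public static WebArchive createDeployment() {
        return ShrinkWrap.create(WebArchive.class, "greeter-test.war")
                // every class the test touches must be listed
                .addClass(Greeter.class)
                // the JNDI datasource definition mentioned above
                .addAsWebInfResource("wildfly-ds.xml");
    }
}
```

The test methods themselves then run inside the container against the deployed archive.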
Tuesday, December 1, 2015
Vagrant. Install on CentOS 7.1
Here is a small tutorial on how to install Vagrant on CentOS 7.1:
- Download Vagrant:
wget -q https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.rpm
- Setup Vagrant:
yum localinstall vagrant_1.7.4_x86_64.rpm
- Check Vagrant version:
[root@localhost ~]# vagrant --version
Vagrant 1.7.4
- Download VirtualBox:
- Setup VirtualBox:
sh VirtualBox-5.0.6-103037-Linux_amd64.run
- Setup dependencies:
yum install binutils gcc make patch libgomp glibc-headers glibc-devel kernel-headers kernel-PAE-devel dkms kernel-devel
- Setup kernel module:
KERN_DIR=/usr/src/kernels/3.10.0-229.14.1.el7.x86_64
export KERN_DIR
/sbin/rcvboxdrv setup
Stopping VirtualBox kernel modules [ OK ]
Recompiling VirtualBox kernel modules [ OK ]
Starting VirtualBox kernel modules [ OK ]
Tuesday, November 24, 2015
DB2 CLP. Disable autocommit
To disable autocommit in the DB2 CLP, run the following command:
UPDATE COMMAND OPTIONS USING c OFF
To check the result, run:
LIST COMMAND OPTIONS
Tuesday, November 3, 2015
Linux Remote Desktop in Docker (part 1)
Here are my investigations about remote technologies in Linux.
My aim: to create a Docker container with Firefox and access it remotely over the network.
This is a kind of manual testing environment.
Friday, October 16, 2015
Docker. Network settings
Docker network settings are the following:
--dns sets the DNS server that will be used by the container.
--dns-search sets the search domain (the FQDN part without the host name).
If neither --dns nor --dns-search is given, the /etc/resolv.conf file of the container will be the same as the /etc/resolv.conf file of the host the daemon is running on.
-h / --hostname sets the hostname of the container. The corresponding record is added to /etc/hosts.
--link sets up a connection to another container. Knowing the IP of the other container is not required, only its name. To assign a name to a container, use the --name flag.
For example, there are two containers: web and db. To create the link between the containers, stop the web container and start it with the --link flag like this:
# docker run -d -P --name web --link db:db <image> startserver.sh
By using docker ps you can see the links between containers.
Also, inside the containers the environment variables and /etc/hosts are altered.
Also, a container can bind its ports to host ports. Use the -p flag:
1. docker run -p IP:host_port:container_port
2. docker run -p IP::container_port
3. docker run -p host_port:container_port
When necessary, containers can be moved to another subnet. For this, the Docker daemon should be started with the --bip flag.
Docker. Limit container resources.
Here is how to limit a container's resources:
Limit by CPU:
Use docker run -c option.
Limit RAM:
docker run -m 1024m
Limit by HDD:
There is no universal way to achieve this. It is recommended to use the devicemapper storage driver.
Also, by default the size of a container is 10 GB. This can be tuned by changing the dm.basesize parameter.
Friday, October 9, 2015
Docker. Storage driver for CentOS 7.1
By default, when starting a Docker container, the following message appears:
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Reading the man docker page reveals the following:
The only backend which currently takes options is devicemapper.
Thursday, October 8, 2015
Docker. Application logging patterns inside container
Here are three patterns on how to get the logs from the Docker container.
First, use the -v option to mount a file location inside the container to a location in the host file system. The -v option gives you flexibility in where to redirect files.
Second, use a centralized logging server, for example Kafka queues for further processing.
Third, you can use shared volumes to pull logs from other containers into one running container. This can save processing resources: if every container ran its own service to send logs, that would be a waste of resources. Instead, pull the logs into one container and use a single logging service to send them.
Docker. Logs. CentOS 7.1
To view the container logs you should run `docker logs <container id>`.
By default, container logs are located in /var/lib/docker/containers/[CONTAINER_ID]/[CONTAINER_ID]-json.log.
Logs grow constantly, so they should be cleaned up on a regular basis.
Docker 1.8 and up has a built-in log rotation mechanism.
The current best practice for rotating Docker logs is to have logrotate use the copytruncate method to copy the log file and then truncate it in place.
For this, create the file `/etc/logrotate.d/docker-container` with the following content:
/var/lib/docker/containers/*/*.log
{
rotate 7
daily
compress
size=1M
missingok
delaycompress
copytruncate
}
Update the logrotate config:
logrotate -fv /etc/logrotate.d/docker-container
That is all.
There is a possibility to redirect logs to different backends with the --log-driver parameter.
By default the json-file driver is used. With other log drivers, the built-in `docker logs` command stops working.
Docker. Install on CentOS 7.1
A lot of blog posts and articles today are written about Docker.
What is this all about?
Docker is an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux, Mac OS and Windows.
Tuesday, August 18, 2015
Recent read books
Just to track my reading list, here is what I have finished reading recently:
Both books were written by Jeff Atwood, the creator of Stack Overflow. He is great at writing and has a keen mind. Just read the tables of contents and see whether you want to read these books.
Monday, August 10, 2015
GWT 2.7.0. Deserialize json on server with Autobean framework
One can use JSON libraries to deserialize JSON to POJOs. How do you do it with GWT?
GWT ships with the AutoBean framework. It allows you to serialize JSON on the client, but we can use it on the server side too.
Here is the magic:
1) define an interface which extends AutoBeanFactory
2) on the server side, use beanFactory = AutoBeanFactorySource.create(Your_Factory_class.class);
AutoBeanFactorySource lives in the com.google.web.bindery.autobean.vm package. One caveat is that it is experimental, so check how it works in your environment.
3) To decode JSON to a POJO, use:
AutoBean<Your_POJOInterface> bean = AutoBeanCodex.decode(
beanFactory, Your_POJOInterface.class, jsonString);
Your_POJOInterface requestObject = bean.as();
4) That is all! Just plain GWT and no other special mappers!
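For completeness, step 1 might look like the following minimal sketch; the Person interface and the factory name are hypothetical, not from the original post:

```java
import com.google.web.bindery.autobean.shared.AutoBean;
import com.google.web.bindery.autobean.shared.AutoBeanFactory;

// A plain interface describing the JSON structure; getters map to JSON keys.
interface Person {
    String getName();
    int getAge();
}

// The factory interface from step 1: one method per bean type.
interface MyBeanFactory extends AutoBeanFactory {
    AutoBean<Person> person();
}
```

With this factory, the AutoBeanCodex.decode call above would take MyBeanFactory and Person.class as arguments.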
Thursday, August 6, 2015
Check GWT and SmartGWT versions in project
You have a GWT & SmartGWT project. How do you check the versions of these jar libraries in the project?
For the GWT version, open the *.cache.* file and look for the $gwt_version variable.
As for SmartGWT, see the com.smartgwt.client.Version class.
Friday, July 31, 2015
J2EE. Logging in application
Logging is a cross-cutting concern in an application.
It is tedious to write in every class:
Logger log = LogManager.getLogger(getClass());
To speed things up, one can copy this line from one class to another. Not a very expressive approach, and it is more code to read.
Nowadays we have annotations, we have EJB, we have CDI.
We can improve the logging code in two ways:
- implement the logging as a cross-cutting concern with a CDI interceptor
- if we need more logging, it is much simpler to inject the logger
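The second option can be sketched with a CDI producer method, so every bean simply injects its logger. This is a sketch, assuming log4j2 and a standard CDI container; the class names are illustrative:

```java
import javax.enterprise.inject.Produces;
import javax.enterprise.inject.spi.InjectionPoint;
import javax.inject.Inject;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoggerProducer {

    // CDI calls this to create a logger named after the class doing the injection.
    @Produces
    public Logger produceLogger(InjectionPoint ip) {
        return LogManager.getLogger(ip.getMember().getDeclaringClass());
    }
}

// Usage in any CDI bean:
class SomeService {
    @Inject
    private Logger log; // no more copy-pasted LogManager.getLogger(...) lines
}
```

The producer runs once per injection point, so each class still gets a logger with its own name.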
Thursday, July 30, 2015
Get info about user in WebSphere Application Server 8.5.5
Sometimes you need to get info about a user in WebSphere Application Server. WAS is connected to LDAP, and I had to find out the full name of a user. How to do it?
One can use VMM (Virtual Member Manager), the subsystem for user management in WAS.
To compile the VMM code, the following jars have to be on the classpath:
<WAS_HOME>\plugins\com.ibm.ws.runtime.jar
<WAS_HOME>\plugins\com.ibm.ws.runtime.wim.base.jar
<WAS_HOME>\plugins\org.eclipse.emf.commonj.sdo.jar
<WAS_HOME>\lib\j2ee.jar
Here is the code to get the user name:
import java.util.List;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import com.ibm.websphere.wim.SchemaConstants;
import com.ibm.websphere.wim.Service;
import com.ibm.websphere.wim.client.LocalServiceProvider;
import com.ibm.websphere.wim.ras.WIMTraceHelper;
import com.ibm.websphere.wim.util.SDOHelper;
import commonj.sdo.DataObject;
class VMMRealm {

    private static Logger log = LogManager.getLogger(VMMRealm.class);

    // Virtual member manager service that is used to make API calls
    static Service service = null;

    static {
        service = locateService();
    }

    @SuppressWarnings("unchecked")
    public UserInfo getUserData(String login) throws UserStoreAccessException {
        UserInfo userInfo = null;
        DataObject root = null;
        try {
            root = SDOHelper.createRootDataObject();
            DataObject searchCtrl = SDOHelper.createControlDataObject(root,
                    null, SchemaConstants.DO_SEARCH_CONTROL);
            searchCtrl.getList(SchemaConstants.PROP_PROPERTIES).add("sn");
            searchCtrl.getList(SchemaConstants.PROP_PROPERTIES).add("uid");
            searchCtrl.getList(SchemaConstants.PROP_PROPERTIES).add("cn");
            searchCtrl.getList(SchemaConstants.PROP_PROPERTIES).add("telephoneNumber");
            searchCtrl.getList(SchemaConstants.PROP_PROPERTIES).add("createTimestamp");
            searchCtrl.getList(SchemaConstants.PROP_PROPERTIES).add("givenName");
            searchCtrl.getList(SchemaConstants.PROP_PROPERTIES).add("title");
            searchCtrl.setString(SchemaConstants.PROP_SEARCH_EXPRESSION,
                    String.format("@xsi:type='PersonAccount' and uid='%s'", login));
            log.trace(printDO(root));
            root = service.search(root);
            log.trace(printDO(root));
            userInfo = new UserInfo();
            convertDataObjectToUserInfo(root, userInfo);
        } catch (Exception e) {
            throw new UserStoreAccessException("Error getting user", e);
        }
        return userInfo;
    }

    /**
     * Convert data object to user info
     *
     * @param root
     * @param info
     */
    private void convertDataObjectToUserInfo(DataObject root, UserInfo info) {
        List entities = root.getList(SchemaConstants.DO_ENTITIES);
        for (int i = 0; i < entities.size(); i++) {
            DataObject ent = (DataObject) entities.get(i);
            info.setCn(ent.getString("cn"));
            info.setUid(ent.getString("uid"));
        }
    }

    /**
     * Loop through the entities in the DataObject and print each uniqueName
     *
     * @param root input DataObject
     */
    public static void printIdentifiers(DataObject root) throws Exception {
        // Get all entities in the DataObject
        List entities = root.getList(SchemaConstants.DO_ENTITIES);
        for (int i = 0; i < entities.size(); i++) {
            DataObject ent = (DataObject) entities.get(i);
            // Get the entity Identifier
            DataObject id = ent.getDataObject(SchemaConstants.DO_IDENTIFIER);
            if (id != null) {
                String uniqueName = id.getString(SchemaConstants.PROP_UNIQUE_NAME);
                log.debug("UniqueName is -> " + uniqueName);
            } else {
                log.debug("Missing Identifier");
            }
        }
    }

    /**
     * Locates the virtual member manager service in the local JVM
     **/
    private static Service locateService() {
        try {
            // Local access to the virtual member manager Service
            return new LocalServiceProvider(null);
        } catch (Exception e) {
            log.error(e.getMessage(), e);
        }
        return null;
    }

    public static String printDO(DataObject obj) {
        return WIMTraceHelper.printDataObject(obj);
    }
}
To get info about the user, call the getUserData method.
J2EE6. CDI qualifier convenient injection
When you have an interface reference in a class and this interface has more than one implementation, you somehow have to specify which implementation to inject.
CDI offers the qualifier mechanism (annotations) to resolve this. For example, you can create one annotation for the PROD version and another annotation for the TEST version of the implementation. But two annotations for such a thing are too much.
We can simplify it.
Here is our qualifier:
@Qualifier
@Retention(RUNTIME)
@Target({ FIELD, TYPE, METHOD })
public @interface ConnectionFactory {
ConnectionFactoryType value();
}
Here is the example of ConnectionFactoryType:
public enum ConnectionFactoryType {
PROD, TEST
}
Here is how the implementation of the CDI bean looks:
@ConnectionFactory(ConnectionFactoryType.PROD)
@ApplicationScoped
public class ProdConnectionFactory implements IConnectionFactory {}
Here is how you inject the bean:
@Inject
@ConnectionFactory(ConnectionFactoryType.PROD)
private IConnectionFactory connectionFactory;
What we got here:
1) we created one annotation for the different connection implementations;
2) we can select an implementation by specifying the enum value;
3) this is reusable and better than creating one annotation per implementation.
Monday, July 27, 2015
Scala packages implicit import
By default in Scala, the following packages are implicitly imported into every compilation unit:
- java.lang._
- scala._
- scala.Predef._
Thursday, July 23, 2015
Singleton pattern in java
Currently I'm interested in software patterns. I'm reading the classic book "Design Patterns: Elements of Reusable Object-Oriented Software".
One of the patterns described is Singleton. It is a popular pattern in Java. So, how could you implement it in Java?
class Singleton {
    private static Singleton instance;

    private Singleton() {}

    public static synchronized Singleton getInstance() {
        if (instance == null) instance = new Singleton();
        return instance;
    }
}
But, how about this?
public enum Singleton {
    INSTANCE;

    public void yourBusinessMethod() {}
}
A single-element enum type is the best way to implement a singleton. This is stated in "Effective Java".
What are the pros of this solution?
- no serialization problems
- guarantee against multiple instantiation
- easy to read (less) code
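A quick sketch showing the enum singleton in use; the counter business method is hypothetical, added just to demonstrate that state is shared through the single instance:

```java
public class SingletonDemo {

    // A single-element enum: the JVM guarantees exactly one INSTANCE,
    // even in the face of serialization and reflection.
    public enum Counter {
        INSTANCE;

        private int count = 0;

        public int increment() {
            return ++count;
        }
    }

    public static void main(String[] args) {
        Counter a = Counter.INSTANCE;
        Counter b = Counter.INSTANCE;
        a.increment();
        b.increment();
        // Both references point to the same object, so the state is shared.
        System.out.println(a == b);
        System.out.println(a.increment());
    }
}
```

Run fresh, this prints true and then 3, because both references increment the same counter.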
Friday, June 19, 2015
Productivity works
Do you like to work on multiple tasks simultaneously? If you are a computer, then you can. But if you are a human, that is hardly possible without sacrificing the quality of your work.
Can you get more done by working on multiple tasks simultaneously?
How much time do you spend on context switching between tasks?
Do you like it when a phone call, a colleague, or an IM interrupts you during your work?
As for me, I like to work on jobs serially, one by one. This means I can dedicate all my attention to the job. If you do multiple things at a time, you can't complete them all well.
Whenever possible, avoid interruptions and avoid working on more than one task at the same time. More on this can be read in Kathy Sierra's post "Your brain on multitasking".
Thursday, June 18, 2015
WebSphere Portal 7 webdav theme resources
First, to upload theme static resources to the portal, use the WebDAV URL:
http://<host>:10039/wps/mycontenthandler/dav/themelist
Second, to modify uploaded theme static resources, use the following URL:
http://<host>:10039/wps/mycontenthandler/dav/fs-type1
Maybe this will save you some time.
Monday, June 1, 2015
DB2. Export data tables from a remote database
If you have the IBM DB2 client and you have to move data from a remote source database to a local target database, you can use the db2move utility.
First, catalog the node:
1) db2 catalog tcpip node <node_name> remote <server_name> server <port_number>
2) db2 terminate
Second, catalog the database:
3) db2 catalog db <remote_db_name> as <local_alias_db_name> at node <node_name>
4) db2 terminate
Export data:
db2move <db_name> export -sn <schema_name> -aw -u <login> -p <password>
You can move whole schemas or given tables; see the help for db2move.
Before importing data into existing tables, you should drop the identity constraint, or the import process will fail:
db2 alter table <table_name> alter column <column_name_with_identity> drop identity
Import data:
db2move <db_name> import -u <login> -p <password>
After import restore identity constraint:
db2 ALTER TABLE <table_name> ALTER COLUMN <column_name_with_identity> SET GENERATED AS IDENTITY ( START WITH <count_the_number_yourself> INCREMENT BY 1)
To uncatalog database and node:
db2 uncatalog database <alias-name>
db2 uncatalog node <node_name>
Monday, May 25, 2015
AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis
Ever heard of antipatterns? Read the top-rated book about antipatterns. Think it is the right thing to read? Read the review here. Well, you can save some time and instead read the wiki page about them.
Recommendations for using E-mail at work
• Treat every e−mail as if it could be used as evidence in a court of law.
• Treat every e−mail as if it were going directly to your enemy.
These rules are simple and keep you safe.
Wednesday, May 13, 2015
Teach Yourself Programming in Ten Years
Do you know how long it takes to learn programming? I don't know. Practice a lot, learn a lot, solve a lot of problems.
Read the article by Peter Norvig about teaching yourself programming. The article is under the cut for historical reference and convenience. All rights to it belong to Peter Norvig.
Last read book: SQL Antipatterns: Avoiding the Pitfalls of Database Programming
Hello! Today I'm going to share my impressions of SQL Antipatterns: Avoiding the Pitfalls of Database Programming.
Tuesday, May 5, 2015
Db2 LUW 10.5 JDBC driver properties
The DB2 JDBC driver for LUW has a lot of properties. A description can be found at http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.apdv.java.doc/src/tpc/imjcc_r0052607.html.
One useful property I needed is "currentSchema". It specifies the default schema name that is used to qualify unqualified database objects in dynamically prepared SQL statements. The value of this property sets the CURRENT SCHEMA special register on the database server. The schema name is case-sensitive and must be specified in uppercase characters.
In WebSphere Application Server these properties can be found in the "Custom properties" section of the datasource.
Wednesday, April 29, 2015
Use a code generator to automate JDBC DAO creation
I'm tired of writing JDBC DAOs for database access. It is tedious and repetitive work. Why not use JPA? Not every data model can be covered with JPA.
After some research, I found the Telosys Tools project. It allows you to generate JDBC DAOs.
What can it offer for free? It's open source :-).
It provides an Eclipse plugin and templates. The templates can be found here.
It also has good documentation; in about five minutes you can figure out how to use it.
What you get is free time.
Convert XSD to SQL with Altova XmlSpy 2011
Sometimes you get the task of parsing XML and loading it into a database. The XML conforms to an XML schema.
You have to create the SQL for that XSD. How do you do it?
XmlSpy has the ability to convert an XSD into SQL, and it claims to support DB2.
The surprise was that XmlSpy uses an ODBC driver. That's a big surprise. Ok.
After some processing XmlSpy produced the SQL, but... all attributes came out as VARGRAPHIC.
XmlSpy allows you to change the SQL types, but that is a lot of work.
This is very strange behavior, because the XSD already declared standard XML schema types.
At least I've got a starting SQL to work with...
Tuesday, April 28, 2015
Bugs in your code
"The software industry average is 15 to 50 bugs per 1,000 lines of code."
This is a quotation from the book SQL Antipatterns: Avoiding the Pitfalls of Database Programming (p. 71).
I don't know how they arrived at these numbers, but the figure is rather curious.
Do you know how many bugs are in your code?
Thursday, April 23, 2015
Last read book: Mastering Apache Maven 3
Just finished reading Mastering Apache Maven 3.
Although it is a step-by-step guide and contains a lot of examples, this book is more of a reference guide.
Monday, April 6, 2015
WebSphere Application Server 8.5 JAX-RS annotations support
WebSphere Application Server 8.5 JAX-RS supports Jackson 1.
The Jackson library is included in the runtime environment of the product. You do not need to bundle any additional libraries.
The following Jackson annotations are supported and can be used to annotate POJOs:
org.codehaus.jackson.annotate.JsonAnySetter
org.codehaus.jackson.annotate.JsonAutoDetect
org.codehaus.jackson.annotate.JsonClass
org.codehaus.jackson.annotate.JsonContentClass
org.codehaus.jackson.annotate.JsonCreator
org.codehaus.jackson.annotate.JsonGetter
org.codehaus.jackson.annotate.JsonIgnore
org.codehaus.jackson.annotate.JsonIgnoreProperties
org.codehaus.jackson.annotate.JsonKeyClass
org.codehaus.jackson.annotate.JsonProperty
org.codehaus.jackson.annotate.JsonPropertyOrder
org.codehaus.jackson.annotate.JsonSetter
org.codehaus.jackson.annotate.JsonSubTypes
org.codehaus.jackson.annotate.JsonSubTypes.Type
org.codehaus.jackson.annotate.JsonTypeInfo
org.codehaus.jackson.annotate.JsonTypeName
org.codehaus.jackson.annotate.JsonValue
org.codehaus.jackson.annotate.JsonWriteNullProperties
org.codehaus.jackson.map.annotate.JsonCachable
org.codehaus.jackson.map.annotate.JsonDeserialize
org.codehaus.jackson.map.annotate.JsonSerialize
org.codehaus.jackson.map.annotate.JsonTypeIdResolver
org.codehaus.jackson.map.annotate.JsonTypeResolver
org.codehaus.jackson.map.annotate.JsonView
Thursday, March 12, 2015
Backbone.js: remove a model from a collection without a reference to the collection
What if you have a model in a collection, and in the model's view you need to remove the model from the collection? You could call the destroy method on the model, but that will also send an AJAX DELETE request to the server.
Instead, you can trigger the destroy event on the model like this: this.model.trigger('destroy', this.model).
Your model will be removed from the collection and no DELETE request will be sent to the server.
Monday, March 2, 2015
Joel Spolsky. Joel on Software: And on Diverse and Occasionally Related Matters That Will Prove of Interest to Software Developers, Designers, and Managers, and to Those Who, Whether by Good Fortune or Ill Luck, Work with Them in Some Capacity
The world of IT is not about programming; it is all about making money. To stay profitable, companies have to understand their clients' needs. Who can understand the clients' needs better: programmers or program managers? How should the development process be organized? How should the company's internal processes be organized? Should managers get in the way of programmers, throw away all the written code, and start from the ground up?
Some answers to these questions can be found in the book Joel on Software: And on Diverse and Occasionally Related Matters That Will Prove of Interest to Software Developers, Designers, and Managers, and to Those Who, Whether by Good Fortune or Ill Luck, Work with Them in Some Capacity. Although it is a bit outdated, it is still relevant, because it describes human mistakes and the relationships between people in the IT business.
Personally, here are some thoughts I liked:
The desire to rewrite a program from scratch is inversely proportional to the experience of the programmers.
The most important skill of a program manager is learning how to get programmers to do what he needs by convincing them it was their own idea.
Monday, February 23, 2015
Last read book: Functional JavaScript: Introducing Functional Programming with Underscore.js
While reading the documentation on underscore.js I found a link to the book Functional JavaScript. For a long time I had postponed learning this topic, and now I can fill the gap in my knowledge of functional programming and JavaScript techniques.
This book has both positive and negative reviews, but in my opinion it takes a practical approach. Can I recommend it to anyone? In my opinion, yes. Of course, you cannot master functional programming from this book alone, but it gives you a lot of information and directions on where to find more.
What I like about this book is its good bibliography and its references to useful JavaScript libraries.
Sunday, February 22, 2015
Unit testing EJB3
One thing I like about EJB3 is the use of annotations. So, if you want to unit test your EJB, you can use the annotations and reflection to mock the dependencies and inject them.
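A minimal sketch of that idea: reflection finds the annotated fields and sets a mock into them. A stand-in annotation is used here so the sketch compiles without any Java EE jars, and the names (GreeterBean, GreetingService) are illustrative only.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class EjbInjectionSketch {

    // Stand-in for javax.ejb.EJB so the sketch needs no Java EE dependency.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface EJB {}

    interface GreetingService { String greet(String name); }

    // The bean under test: in a container, the field would be injected.
    static class GreeterBean {
        @EJB
        private GreetingService service;
        String welcome(String name) { return "Welcome, " + service.greet(name); }
    }

    // Set the mock into every field that carries the @EJB annotation
    // and whose type the mock implements.
    static void injectMocks(Object bean, Object mock) {
        for (Field f : bean.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(EJB.class) && f.getType().isInstance(mock)) {
                f.setAccessible(true);
                try {
                    f.set(bean, mock);
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }

    public static void main(String[] args) {
        GreeterBean bean = new GreeterBean();
        injectMocks(bean, (GreetingService) name -> name.toUpperCase());
        System.out.println(bean.welcome("alice")); // prints "Welcome, ALICE"
    }
}
```

The business method is then exercised against the mock, and no container is required for the test.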
Sunday, February 15, 2015
Last read book: How Linux Works, 2nd Edition
Just finished reading the book How Linux Works, 2nd Edition. I enjoyed reading it. It gave me a better understanding of the current state of Linux and its ecosystem. I can recommend this book...
Saturday, February 7, 2015
Share files on Linux through a one-time HTTP server
It is easy to share files on Linux. You have a lot of options, like scp or ftp. But I've found a very simple way: start a minimal HTTP server.
If you want to expose a directory, you can start the server in that directory like the following:
python -m SimpleHTTPServer
(On Python 3, the equivalent is python3 -m http.server.)
Open up your browser at http://127.0.0.1:8000 and that's it: your directory.
Saturday, January 31, 2015
Linux network configuration managers
There are a lot of network configuration managers for Linux. Here are some of them:
- Network Manager
- OpenWRT netifd
- Android's ConnectivityManager service
- ConnMan
- wicd
Friday, January 30, 2015
EAR and dependencies
On a new project our team decided to switch from Spring to EJB3.
We created an EAR project, a web module, and an EJB module, and immediately ran into a classpath problem. Here is a cheat sheet for organizing an EAR project:
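As a sketch of the usual convention that solves this (the module and jar names here are hypothetical): since Java EE 5, any jar placed in the EAR's lib/ directory is automatically on the classpath of every module in the EAR.

```
myapp.ear
├── lib/
│   └── common-utils.jar   <- shared classes, visible to all modules
├── myapp-ejb.jar          <- EJB module
└── myapp-web.war          <- web module (web-only jars go in its WEB-INF/lib)
```

Alternatively, a module can reference a jar located at the EAR root through a Class-Path: entry in its META-INF/MANIFEST.MF.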