Friday, December 27, 2019

Documenting Web APIs

A long time ago when SOAP was ubiquitous, WSDL was the only way to document your API.

There were two approaches: code first and contract first. With the code-first approach you develop your service first and then generate the WSDL with the help of a library. With the contract-first approach you write the WSDL description of your service first and then code the service.

Now everyone is doing REST. To document a service you can use RAML or Swagger annotations. I have not used RAML, but I have used Swagger, and with Swagger you can take either of the two approaches as well. Using Swagger annotations was a breeze. Now the good news: Swagger was acquired by SmartBear, and the specification was donated to the OpenAPI Initiative and renamed the OpenAPI Specification.
Instead of Swagger annotations we can now use OpenAPI annotations.
With the Maven Swagger plugin you can generate the specification at compile time. There are also plugins that generate the specification and serve it at runtime.
Apart from that, new tools keep appearing for working with OpenAPI specifications: editors, validators, report generators and so on. You can check https://openapi.tools for a list of tools.
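For a flavor of what the specification itself looks like, here is a minimal, hand-written OpenAPI 3.0 document. The pet-store path and schema are purely illustrative, not taken from any real service:

```yaml
openapi: "3.0.3"
info:
  title: Pet Store            # illustrative API name
  version: "1.0.0"
paths:
  /pets/{petId}:
    get:
      summary: Get a pet by its id
      parameters:
        - name: petId
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested pet
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                  name:
                    type: string
        "404":
          description: Pet not found
```

Editors and validators from the tools list above can consume a file like this directly; the same document can also be produced from annotations at build time.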

There is a good introductory book on the OpenAPI Specification: The Design of Web APIs by Arnaud Lauret.
I specifically recommend reading chapters 4 and 12. Chapter 4 tells you how to create a specification; chapter 12 tells you about producing reports from the specification.

Tuesday, July 2, 2019

12-factor application methodology

1. There should be a one-to-one association between a versioned codebase (for example,
a Git repository) and a deployed service. The same codebase is used for many
deployments.

2. Services should explicitly declare all dependencies, and should not rely on the presence
of system-level tools or libraries.

3. Configuration that varies between deployment environments should be stored in the
environment (specifically in environment variables).

4. All backing services are treated as attached resources, which are managed (attached and
detached) by the execution environment.

5. The delivery pipeline should have strictly separate stages: Build, release, and run.

6. Applications should be deployed as one or more stateless processes. Specifically,
transient processes must be stateless and share nothing. Persisted data should be stored
in an appropriate backing service.

7. Self-contained services should make themselves available to other services by listening
on a specified port.

8. Concurrency is achieved by scaling individual processes (horizontal scaling).

9. Processes must be disposable: Fast startup and graceful shutdown behaviors lead to a
more robust and resilient system.

10. All environments, from local development to production, should be as similar as possible.

11. Applications should produce logs as event streams (for example, writing to stdout and
stderr), and trust the execution environment to aggregate streams.

12. If admin tasks are needed, they should be kept in source control and packaged alongside
the application, to ensure that they run in the same environment as the application.

More at https://12factor.net/ru/.

The eight fallacies of distributed computing

Distributed computing is a concept with roots that stretch back decades. The eight fallacies of
distributed computing deserve a mention (the first seven were drafted by L. Peter Deutsch in 1994; the eighth was added later):

1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn’t change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.

See Fallacies of Distributed Computing Explained, available at: http://www.rgoarchitects.com/Files/fallacies.pdf

Wednesday, May 22, 2019

Spring. @Component and Further Stereotype Annotations

For reference:

1.10.1. @Component and Further Stereotype Annotations


The @Repository annotation is a marker for any class that fulfills the role or stereotype of a
repository (also known as Data Access Object or DAO). Among the uses of this marker is the
automatic translation of exceptions, as described in Exception Translation.

Spring provides further stereotype annotations: @Component, @Service, and @Controller. @Component is a generic stereotype for any Spring-managed component. @Repository, @Service, and @Controller are specializations of @Component for more specific use cases (in the persistence, service, and presentation layers, respectively).

Therefore, you can annotate your component classes with @Component, but, by annotating them with @Repository, @Service, or @Controller instead, your classes are more properly suited for processing by tools or associating with aspects. For example, these stereotype annotations make ideal targets for pointcuts.

@Repository, @Service, and @Controller can also carry additional semantics in future releases of the Spring Framework. Thus, if you are choosing between using @Component or @Service for your service layer, @Service is clearly the better choice.

Similarly, as stated earlier, @Repository is already supported as a marker for automatic exception translation in your persistence layer.
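The stereotype mechanism rests on meta-annotations: @Service is itself annotated with @Component, which is how component scanning finds it. The following is a toy, framework-free reproduction of that pattern (the annotations and classes here are stand-ins, not Spring's own):

```java
import java.lang.annotation.*;

// A toy reproduction of Spring's stereotype pattern: @Service is itself
// annotated with @Component, so a scanner that looks for @Component
// (directly or as a meta-annotation) picks up @Service-annotated classes too.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Component {}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@Component // the meta-annotation that makes @Service a specialization
@interface Service {}

@Service
class OrderService {}

public class StereotypeDemo {
    // True if the class carries @Component directly, or carries any
    // annotation that is itself annotated with @Component.
    static boolean isComponent(Class<?> type) {
        if (type.isAnnotationPresent(Component.class)) {
            return true;
        }
        for (Annotation a : type.getAnnotations()) {
            if (a.annotationType().isAnnotationPresent(Component.class)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isComponent(OrderService.class)); // prints "true"
    }
}
```

Spring's real scanner does considerably more (proxying, ordering, conditional registration), but this is the essence of why @Service classes are discovered just like @Component ones.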

Wednesday, April 3, 2019

UML Diagrams for Java Programmers. Robert C. Martin. [Extracts]


When to draw diagrams, and when to stop.

Don’t make a rule that everything must be diagrammed. Such rules are worse than useless.
Enormous amounts of project time and energy can be wasted in pursuit of diagrams that
no one will ever read.

When to draw diagrams:
• Draw diagrams when several people need to understand the structure of a particular part of the design because they are all going to be working on it simultaneously. Stop when everyone agrees that they understand.
• Draw diagrams when two or more people disagree on how a particular element should be designed, and you want team consensus. Put the discussion into a timebox and choose a means for deciding, like a vote or an impartial judge. Stop at the end of the timebox, or when the decision can be made. Then erase the diagram.
• Draw diagrams when you just want to play with a design idea, and the diagrams
can help you think it through. Stop when you’ve gotten to the point that you can
finish your thinking in code. Discard the diagrams.
• Draw diagrams when you need to explain the structure of some part of the code
to someone else, or to yourself. Stop when the explanation would be better done
by looking at code.
• Draw diagrams when it’s close to the end of the project and your customer has
requested them as part of a documentation stream for others.

When not to draw diagrams:
• Don’t draw diagrams because the process tells you to.
• Don’t draw diagrams because you feel guilty not drawing them or because you
think that’s what good designers do. Good designers write code and draw dia-
grams only when necessary.
• Don’t draw diagrams to create comprehensive documetation of the design phase
prior to coding. Such documents are almost never worth anything and consume
immense amounts of time.
• Don’t draw diagrams for other people to code. True software architects partici-
pate in the coding of their designs, so that they can lay in the bed they have made.

CASE Tools.

UML CASE tools can be beneficial, but they can also be expensive dust collectors. Be
very careful about making a decision to purchase and deploy a UML CASE tool.

• Don’t UML CASE tools make it easier to draw diagrams?
No, they make it significantly harder. There is a long learning curve to get profi-
cient; and even then the tools are more cumbersome than whiteboards. White-
boards are very easy to use. Developers are usually already familiar with them. If
not, there is virtually no learning curve.
• Don’t UML CASE tools make it easier for large teams to collaborate on dia-
grams?
In some cases. However, the vast majority of developer and development
projects do not need to be producing diagrams in such quantities and complexi-
ties that they require an automated collaborative system to coordinate their activ-
ities. In any case, the best time to purchase a system to coordinate the preparation
of UML diagrams is when a manual system has first been put in place, is starting
to show the strain, and there is no other choice but to automate.
• Don’t UML CASE tools make it easier to generate code?
The sum total effort involved in creating the diagrams, generating the code, and
then using the generated code is not likely to be less then the cost of just writing
the code in the first place. If there is a gain, it is not an order of magnitude, or
even a factor of two. Developers know how to edit text file and use IDEs. Gener-
ating code from diagrams may sound like a good idea; but I stronly urge you to
measure the productivity increase before you spend a lot of money.
• What about these CASE tools that are also IDEs and show the code and diagrams together?
These tools are definitely cool. However, I don't think the constant presence of UML is important. The fact that the diagram changes as I modify the code, or that the code changes as I modify the diagram, does not really help me much. Frankly, I'd rather buy an IDE that has put its effort into figuring out how to help me manipulate my programs than my diagrams. Again, measure the productivity improvement before making a huge monetary commitment.
In short, look before you leap, and look very hard. There may be a benefit to outfitting your team with an expensive CASE tool; but verify that benefit with your own experiments before buying something that could very well turn into shelfware.

But what about documentation?
Good documentation is essential to any project. Without it the team will get lost in a sea of code. On the other hand, too much documentation of the wrong kind is worse, because then you have all this distracting and misleading paper, and you still have the sea of code.

Documentation must be created, but it must be created prudently. Often the choice not
to document is just as important as the choice to document. A complex communication
protocol needs to be documented.

A complex relational schema needs to be documented.
A complex reusable framework needs to be documented.

However, none of these things needs a hundred pages of UML. Software documentation should be short and to the point. The value of a software document is inversely proportional to its size.
For a project team of 12 people working on a project of a million lines of Java, I
would have a total of 25 to 200 pages of persistent documentation, with my preference
being for the smaller. These documents would include UML diagrams of the high level
structure of the important modules, ER diagrams of the relational schema, a page or two
about how to build the system, testing instructions, source code control instructions, etc.
I would put this documentation into a wiki, or some collaborative authoring tool so
that anyone on the team can have access to it on their screens and search it, and anyone
can change it as need be.
It takes a lot of work to make a document small, but that work is worth it. People will read small documents. They won't read 1,000-page tomes.

Can code really be used to describe part of a system? In fact, this should be a goal of the developers and designers. The team should strive to create code that is expressive and readable. The more the code can describe itself, the fewer diagrams you will need, and the better off the whole project will be.

In general, high level diagrams are more useful than low level ones.

One of the great fallacies of software development in the 1990s was the notion that
developers should draw sequence diagrams for all methods before writing the code. This
always proves to be a very expensive waste of time. Don’t do it.

Use Cases

The real trick to doing use cases is to keep them simple. Don't worry about use case forms; just write them on blank paper, or on a blank page in a simple word processor, or on blank index cards. Don't worry about filling in all the details. Details aren't important until much later. Don't worry about capturing all the use cases; that's an impossible task anyway.
The one thing to remember about use cases is: tomorrow they are going to change. No matter how diligently you capture them, no matter how fastidiously you record the details, no matter how thoroughly you think them through, no matter how much effort you apply to exploring and analyzing the requirements, tomorrow they are going to change.
If something is going to change tomorrow, you don’t really need to capture its details
today. Indeed, you want to postpone the capture of the details until the very last possible
moment.
Think of use cases as: Just In Time Requirements.

Writing Use Cases

Notice the title of this section. We write use cases, we don't draw them. Use cases are not diagrams. Use cases are textual descriptions of behavioral requirements, written from a certain point of view.

What is a use case?
A use case is a description of the behavior of a system. That description is written from the
point of view of a user who has just told the system to do something particular. A use case
captures the visible sequence of events that a system goes through in response to a single
user stimulus.
A visible event is an event that the user can see. Use cases do not describe hidden
behavior at all. They don’t discuss the hidden mechanisms of the system. They only
describe those things that a user can see.

How can you estimate a use case if you don't record its details? You talk to the stakeholders about the details, without necessarily recording them. This will give you the information you need to give a rough estimate. Why not record the details if we're going to talk to the stakeholders about them? Because tomorrow the details are going to change. Yes, the changing details will affect the estimates; but over many use cases those effects integrate out.
Recording the detail too early just isn't cost effective.

If we aren’t going to record the details of the use case just yet, then what do we
record? How do we know that the use case even exists if we don’t write something down?
Write the name of the use case. Keep a list of them in a spreadsheet, or a word processor
document. Better yet, write the name of the use case on an index card and maintain a stack
of use case cards. Fill in the details as they get closer to implementation.

What else?
What about actors, secondary actors, preconditions, postconditions, etc.? What about all that stuff?
Don't worry about it. For the vast majority of the systems you will work on, you won't need to know about all those other things. Should the time come that you need to know more about use cases, you can read Alistair Cockburn's definitive work on the topic: Writing Effective Use Cases, Addison-Wesley, 2001. For now, learn to walk before you learn to run. Get used to writing simple use cases as above. As you master them (defined as having successfully used them in a project), you can ever so carefully and parsimoniously adopt some of the more sophisticated techniques. But remember, don't sit and spin.


Of all the diagrams in UML, use case diagrams are the most confusing, and the least useful. With the exception of the System Boundary Diagram, which I'll describe in a minute, I recommend that you avoid them entirely.

This diagram is almost, but not quite, useless. It contains very little information of use to the Java programmer, but it makes a good cover page for a presentation to stakeholders.


Design Quality

What does it mean to be well designed? A system that is well designed is easy to understand, easy to change, and easy to reuse. It presents no particular development difficulties, is simple, terse, and economical. It is a pleasure to work with. Conversely, a bad design stinks like rotting meat.

Dependency Management
Many of these smells are a result of mismanaged dependencies. Mismanaged dependencies conjure the view of code that is a tangled mass of couplings. Indeed, it is this view of entanglement that was the origin of the term "spaghetti code".
Object oriented languages provide tools that aid in managing dependencies. Interfaces
can be created that break or invert the direction of certain dependencies. Polymorphism
allows modules to invoke functions without depending upon the modules that contain
them. Indeed, an OOPL gives us lots of power to shape the dependencies the way we
want.
So, how do we want them shaped? That's where the following principles come in. I have written a great deal about these principles. The definitive (and most long-winded) treatment is [Martin2002]. There are also quite a number of papers describing these principles on www.objectmentor.com. What follows is a very brief summary.

So, five simple principles:
1. SRP -- A class should have one and only one reason to change.
2. OCP -- It should be possible to change the environment of a class without changing the class.
3. LSP -- Avoid making methods of derivatives illegal or degenerate. Users of base classes should not need to know about the derivatives.
4. DIP -- Depend on interfaces and abstract classes instead of volatile concrete classes.
5. ISP -- Give each user of an object an interface that has just the methods that user needs.
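As a sketch of the DIP in code (the class names here are illustrative, not from the book): the high-level class depends on an abstraction, and the volatile concrete class is supplied from outside.

```java
// DIP sketch: ReportGenerator depends on the Printer abstraction,
// not on a volatile concrete class such as ConsolePrinter.
interface Printer {
    void print(String text);
}

class ConsolePrinter implements Printer {
    public void print(String text) {
        System.out.println(text);
    }
}

class ReportGenerator {
    private final Printer printer; // dependency on an interface, not a concrete class

    ReportGenerator(Printer printer) {
        this.printer = printer;
    }

    void generate() {
        printer.print("report");
    }
}

public class DipDemo {
    public static void main(String[] args) {
        // Swapping in a different printer requires no change to ReportGenerator.
        new ReportGenerator(new ConsolePrinter()).generate(); // prints "report"
    }
}
```

Note how this is a reactive tool: the interface earns its keep only once a second printer, or a test double, actually shows up.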
When should these principles be applied? At the first hint of pain. It is not wise to try to make all systems conform to all principles all the time, every time. You'll spend an eternity trying to imagine all the different environments to apply to the OCP, or all the different sources of change to apply to the SRP. You'll cook up dozens or hundreds of little interfaces for the ISP, and create lots of worthless abstractions for the DIP.
The best way to apply these principles is reactively, as opposed to proactively. When you first detect that there is a structural problem with the code, or when you first realize that a module is being impacted by changes in another, then you should see whether one or more of these principles can be brought to bear to address the problem.
Of course if you take a reactive approach to applying the principles, then you also
need to take a proactive approach to putting the kinds of pressure on the system that will
create pain early. If you are going to react to pain, then you need to diligently find the sore
spots.
One of the best ways to hunt for sore spots is to write lots and lots of unit tests. It
works even better if you write the tests first, before you write the code that passes them.
But that’s a topic for the next chapter.

Stories that are too long should be split. Stories that are too short should be merged. A story should never be longer than three or four days' worth of effort for the whole team. It should never be shorter than about half a day's effort. Stories that are too short tend to be over-estimated. Stories that are too long tend to be under-estimated. So we merge and split stories until they sit near the sweet spot of accurate estimation.

When designers create diagrams without methods they may be partitioning the software on something other than behavior. Partitionings that are not based upon behavior are almost always significant errors. It is the behavior of a system that is the first clue to how the software should be partitioned.

One of the goals of OOD is the partitioning and distribution of behavior into many classes and many functions. It turns out, however, that many object models that appear to be distributed are really the abode of gods in disguise.

The lesson here is simply this: Associations are the pathways through which messages are sent between objects.
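In Java terms, an association is typically a field reference, and the message is a method call through that field. A tiny sketch (the domain classes are illustrative):

```java
// An association rendered in code: Customer holds a reference to Account,
// and sends it messages (method calls) along that pathway.
class Account {
    private long balanceCents = 0;

    void deposit(long cents) {
        balanceCents += cents;
    }

    long balance() {
        return balanceCents;
    }
}

class Customer {
    private final Account account; // the association

    Customer(Account account) {
        this.account = account;
    }

    void pay(long cents) {
        account.deposit(cents); // the message sent along the association
    }
}

public class AssociationDemo {
    public static void main(String[] args) {
        Account account = new Account();
        new Customer(account).pay(500);
        System.out.println(account.balance()); // prints "500"
    }
}
```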

At no point did I need or want a UML diagram to help me with that design.
Nor do I think that the use of UML diagrams would have made the development more efficient or resulted in a superior design.


Saturday, March 2, 2019

Ubuntu 18.04 and Cisco AnyConnect

After installing Cisco AnyConnect, I couldn't start it. That was really, really sad.

In order to fix it I had to install:
sudo apt install libpangox-1.0-0

Now I can run ./vpnui and connect safely to my work PC.