
The Spring Framework - Reference Documentation

Authors

Rod Johnson, Juergen Hoeller, Alef Arendsen, Colin Sampaleanu, Rob Harrop, Thomas Risberg, Darren Davison, Dmitriy Kopylenko, Mark Pollack, Thierry Templier, Erwin Vervaet, Portia Tung, Ben Hale, Adrian Colyer, John Lewis, Costin Leau, Rick Evans

2.1 M1

Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.

Preface
1. Introduction
1.1. Overview
1.2. Usage scenarios
2. What's new in Spring 2.0?
2.1. Introduction
2.2. The Inversion of Control (IoC) container
2.2.1. Easier XML configuration
2.2.2. New bean scopes
2.2.3. Extensible XML authoring
2.3. Aspect Oriented Programming (AOP)
2.3.1. Easier AOP XML configuration
2.3.2. Support for @AspectJ aspects
2.4. The Middle Tier
2.4.1. Easier configuration of declarative transactions in XML
2.4.2. JPA
2.4.3. Asynchronous JMS
2.4.4. JDBC
2.5. The Web Tier
2.5.1. A form tag library for Spring MVC
2.5.2. Sensible defaulting in Spring MVC
2.5.3. Portlet framework
2.6. Everything else
2.6.1. Dynamic language support
2.6.2. JMX
2.6.3. Task scheduling
2.6.4. Java 5 (Tiger) support
2.7. Migrating to Spring 2.0
2.7.1. Changes
2.7.1.1. Jar packaging
2.7.1.2. XML configuration
2.7.1.3. Deprecated classes and methods
2.7.1.4. Apache OJB
2.7.1.5. iBatis
2.7.1.6. UrlFilenameViewController
2.8. Updated sample applications
2.9. Improved documentation
I. Core Technologies
3. The IoC container
3.1. Introduction
3.2. Basics - containers and beans
3.2.1. The container
3.2.1.1. Configuration metadata
3.2.2. Instantiating a container
3.2.2.1. Composing XML-based configuration metadata
3.2.3. The beans
3.2.3.1. Naming beans
3.2.3.2. Instantiating beans
3.2.4. Using the container
3.3. Dependencies
3.3.1. Injecting dependencies
3.3.1.1. Setter Injection
3.3.1.2. Constructor Injection
3.3.1.3. Some examples
3.3.2. Constructor Argument Resolution
3.3.2.1. Constructor Argument Type Matching
3.3.2.2. Constructor Argument Index
3.3.3. Bean properties and constructor arguments detailed
3.3.3.1. Straight values (primitives, Strings, etc.)
3.3.3.2. References to other beans (collaborators)
3.3.3.3. Inner beans
3.3.3.4. Collections
3.3.3.5. Nulls
3.3.3.6. Shortcuts and other convenience options for XML-based configuration metadata
3.3.3.7. Compound property names
3.3.4. Using depends-on
3.3.5. Lazily-instantiated beans
3.3.6. Autowiring collaborators
3.3.6.1. Excluding a bean from being available for autowiring
3.3.7. Checking for dependencies
3.3.8. Method Injection
3.3.8.1. Lookup method injection
3.3.8.2. Arbitrary method replacement
3.4. Bean scopes
3.4.1. The singleton scope
3.4.2. The prototype scope
3.4.3. The other scopes
3.4.3.1. Initial web configuration
3.4.3.2. The request scope
3.4.3.3. The session scope
3.4.3.4. The global session scope
3.4.3.5. Scoped beans as dependencies
3.4.4. Custom scopes
3.4.4.1. Creating your own custom scope
3.4.4.2. Using a custom scope
3.5. Customizing the nature of a bean
3.5.1. Lifecycle interfaces
3.5.1.1. Initialization callbacks
3.5.1.2. Destruction callbacks
3.5.2. Knowing who you are
3.5.2.1. BeanFactoryAware
3.5.2.2. BeanNameAware
3.6. Bean definition inheritance
3.7. Container extension points
3.7.1. Customizing beans using BeanPostProcessors
3.7.1.1. Example: Hello World, BeanPostProcessor-style
3.7.1.2. Example: The RequiredAnnotationBeanPostProcessor
3.7.2. Customizing configuration metadata with BeanFactoryPostProcessors
3.7.2.1. Example: the PropertyPlaceholderConfigurer
3.7.2.2. Example: the PropertyOverrideConfigurer
3.7.3. Customizing instantiation logic using FactoryBeans
3.8. The ApplicationContext
3.8.1. Internationalization using MessageSources
3.8.2. Events
3.8.3. Convenient access to low-level resources
3.8.4. Convenient ApplicationContext instantiation for web applications
3.9. Glue code and the evil singleton
3.9.1. Using the Singleton-helper classes
4. Resources
4.1. Introduction
4.2. The Resource interface
4.3. Built-in Resource implementations
4.3.1. UrlResource
4.3.2. ClassPathResource
4.3.3. FileSystemResource
4.3.4. ServletContextResource
4.3.5. InputStreamResource
4.3.6. ByteArrayResource
4.4. The ResourceLoader
4.5. The ResourceLoaderAware interface
4.6. Resources as dependencies
4.7. Application contexts and Resource paths
4.7.1. Constructing application contexts
4.7.1.1. Constructing ClassPathXmlApplicationContext instances - shortcuts
4.7.2. Wildcards in application context constructor resource paths
4.7.2.1. Ant-style Patterns
4.7.2.2. The classpath*: prefix
4.7.2.3. Other notes relating to wildcards
4.7.3. FileSystemResource caveats
5. Validation, Data-binding, the BeanWrapper, and PropertyEditors
5.1. Introduction
5.2. Validation using Spring's Validator interface
5.3. Resolving codes to error messages
5.4. Bean manipulation and the BeanWrapper
5.4.1. Setting and getting basic and nested properties
5.4.2. Built-in PropertyEditor implementations
5.4.2.1. Registering additional custom PropertyEditors
6. Aspect Oriented Programming with Spring
6.1. Introduction
6.1.1. AOP concepts
6.1.2. Spring AOP capabilities and goals
6.1.3. AOP Proxies
6.2. @AspectJ support
6.2.1. Enabling @AspectJ Support
6.2.2. Declaring an aspect
6.2.3. Declaring a pointcut
6.2.3.1. Supported Pointcut Designators
6.2.3.2. Combining pointcut expressions
6.2.3.3. Sharing common pointcut definitions
6.2.3.4. Examples
6.2.4. Declaring advice
6.2.4.1. Before advice
6.2.4.2. After returning advice
6.2.4.3. After throwing advice
6.2.4.4. After (finally) advice
6.2.4.5. Around advice
6.2.4.6. Advice parameters
6.2.4.7. Advice ordering
6.2.5. Introductions
6.2.6. Aspect instantiation models
6.2.7. Example
6.3. Schema-based AOP support
6.3.1. Declaring an aspect
6.3.2. Declaring a pointcut
6.3.3. Declaring advice
6.3.3.1. Before advice
6.3.3.2. After returning advice
6.3.3.3. After throwing advice
6.3.3.4. After (finally) advice
6.3.3.5. Around advice
6.3.3.6. Advice parameters
6.3.3.7. Advice ordering
6.3.4. Introductions
6.3.5. Aspect instantiation models
6.3.6. Advisors
6.3.7. Example
6.4. Choosing which AOP declaration style to use
6.4.1. Spring AOP or full AspectJ?
6.4.2. @AspectJ or XML for Spring AOP?
6.5. Mixing aspect types
6.6. Proxying mechanisms
6.6.1. Understanding AOP proxies
6.7. Programmatic creation of @AspectJ Proxies
6.8. Using AspectJ with Spring applications
6.8.1. Using AspectJ to dependency inject domain objects with Spring
6.8.1.1. Unit testing @Configurable objects
6.8.1.2. Working with multiple application contexts
6.8.2. Other Spring aspects for AspectJ
6.8.3. Configuring AspectJ aspects using Spring IoC
6.8.4. Using AspectJ Load-time weaving (LTW) with Spring applications
6.9. Further Resources
7. Spring AOP APIs
7.1. Introduction
7.2. Pointcut API in Spring
7.2.1. Concepts
7.2.2. Operations on pointcuts
7.2.3. AspectJ expression pointcuts
7.2.4. Convenience pointcut implementations
7.2.4.1. Static pointcuts
7.2.4.2. Dynamic pointcuts
7.2.5. Pointcut superclasses
7.2.6. Custom pointcuts
7.3. Advice API in Spring
7.3.1. Advice lifecycles
7.3.2. Advice types in Spring
7.3.2.1. Interception around advice
7.3.2.2. Before advice
7.3.2.3. Throws advice
7.3.2.4. After Returning advice
7.3.2.5. Introduction advice
7.4. Advisor API in Spring
7.5. Using the ProxyFactoryBean to create AOP proxies
7.5.1. Basics
7.5.2. JavaBean properties
7.5.3. JDK- and CGLIB-based proxies
7.5.4. Proxying interfaces
7.5.5. Proxying classes
7.5.6. Using 'global' advisors
7.6. Concise proxy definitions
7.7. Creating AOP proxies programmatically with the ProxyFactory
7.8. Manipulating advised objects
7.9. Using the "autoproxy" facility
7.9.1. Autoproxy bean definitions
7.9.1.1. BeanNameAutoProxyCreator
7.9.1.2. DefaultAdvisorAutoProxyCreator
7.9.1.3. AbstractAdvisorAutoProxyCreator
7.9.2. Using metadata-driven auto-proxying
7.10. Using TargetSources
7.10.1. Hot swappable target sources
7.10.2. Pooling target sources
7.10.3. Prototype target sources
7.10.4. ThreadLocal target sources
7.11. Defining new Advice types
7.12. Further resources
8. Testing
8.1. Introduction
8.2. Unit testing
8.3. Integration testing
8.3.1. Context management and caching
8.3.2. Dependency Injection of test fixtures
8.3.2.1. Field level injection
8.3.3. Transaction management
8.3.4. Convenience variables
8.3.5. Java5+ specific support
8.3.5.1. Annotations
8.3.6. PetClinic example
8.4. Further Resources
II. Middle Tier Data Access
9. Transaction management
9.1. Introduction
9.2. Motivations
9.3. Key abstractions
9.4. Resource synchronization with transactions
9.4.1. High-level approach
9.4.2. Low-level approach
9.4.3. TransactionAwareDataSourceProxy
9.5. Declarative transaction management
9.5.1. Understanding the Spring Framework's declarative transaction implementation
9.5.2. A first example
9.5.3. Rolling back
9.5.4. Configuring different transactional semantics for different beans
9.5.5. <tx:advice/> settings
9.5.6. Using @Transactional
9.5.6.1. @Transactional settings
9.5.7. Advising transactional operations
9.5.8. Using @Transactional with AspectJ
9.6. Programmatic transaction management
9.6.1. Using the TransactionTemplate
9.6.1.1. Specifying transaction settings
9.6.2. Using the PlatformTransactionManager
9.7. Choosing between programmatic and declarative transaction management
9.8. Application server-specific integration
9.8.1. BEA WebLogic
9.8.2. IBM WebSphere
9.9. Solutions to common problems
9.9.1. Use of the wrong transaction manager for a specific DataSource
9.10. Further Resources
10. DAO support
10.1. Introduction
10.2. Consistent exception hierarchy
10.3. Consistent abstract classes for DAO support
11. Data access using JDBC
11.1. Introduction
11.1.1. The package hierarchy
11.2. Using the JDBC Core classes to control basic JDBC processing and error handling
11.2.1. JdbcTemplate
11.2.1.1. Examples
11.2.1.2. JdbcTemplate idioms (best practices)
11.2.2. NamedParameterJdbcTemplate
11.2.3. SimpleJdbcTemplate
11.2.4. DataSource
11.2.5. SQLExceptionTranslator
11.2.6. Executing statements
11.2.7. Running Queries
11.2.8. Updating the database
11.3. Controlling database connections
11.3.1. DataSourceUtils
11.3.2. SmartDataSource
11.3.3. AbstractDataSource
11.3.4. SingleConnectionDataSource
11.3.5. DriverManagerDataSource
11.3.6. TransactionAwareDataSourceProxy
11.3.7. DataSourceTransactionManager
11.4. Modeling JDBC operations as Java objects
11.4.1. SqlQuery
11.4.2. MappingSqlQuery
11.4.3. SqlUpdate
11.4.4. StoredProcedure
11.4.5. SqlFunction
12. Object Relational Mapping (ORM) data access
12.1. Introduction
12.2. Hibernate
12.2.1. Resource management
12.2.2. SessionFactory setup in a Spring container
12.2.3. The HibernateTemplate
12.2.4. Implementing Spring-based DAOs without callbacks
12.2.5. Implementing DAOs based on plain Hibernate3 API
12.2.6. Programmatic transaction demarcation
12.2.7. Declarative transaction demarcation
12.2.8. Transaction management strategies
12.2.9. Container resources versus local resources
12.2.10. Spurious application server warnings when using Hibernate
12.3. JDO
12.3.1. PersistenceManagerFactory setup
12.3.2. JdoTemplate and JdoDaoSupport
12.3.3. Implementing DAOs based on the plain JDO API
12.3.4. Transaction management
12.3.5. JdoDialect
12.4. Oracle TopLink
12.4.1. SessionFactory abstraction
12.4.2. TopLinkTemplate and TopLinkDaoSupport
12.4.3. Implementing DAOs based on plain TopLink API
12.4.4. Transaction management
12.5. iBATIS SQL Maps
12.5.1. Setting up the SqlMapClient
12.5.2. Using SqlMapClientTemplate and SqlMapClientDaoSupport
12.5.3. Implementing DAOs based on plain iBATIS API
12.6. JPA
12.6.1. JPA setup in a Spring environment
12.6.1.1. LocalEntityManagerFactoryBean
12.6.1.2. LocalContainerEntityManagerFactoryBean
12.6.1.3. Dealing with multiple persistence units
12.6.2. JpaTemplate and JpaDaoSupport
12.6.3. Implementing DAOs based on plain JPA
12.6.4. Exception Translation
12.7. Transaction Management
12.8. JpaDialect
III. The Web
13. Web MVC framework
13.1. Introduction
13.1.1. Pluggability of other MVC implementations
13.1.2. Features of Spring Web MVC
13.2. The DispatcherServlet
13.3. Controllers
13.3.1. AbstractController and WebContentGenerator
13.3.2. Other simple controllers
13.3.3. The MultiActionController
13.3.4. Command controllers
13.4. Handler mappings
13.4.1. BeanNameUrlHandlerMapping
13.4.2. SimpleUrlHandlerMapping
13.4.3. Intercepting requests - the HandlerInterceptor interface
13.5. Views and resolving them
13.5.1. Resolving views - the ViewResolver interface
13.5.2. Chaining ViewResolvers
13.5.3. Redirecting to views
13.5.3.1. RedirectView
13.5.3.2. The redirect: prefix
13.5.3.3. The forward: prefix
13.6. Using locales
13.6.1. AcceptHeaderLocaleResolver
13.6.2. CookieLocaleResolver
13.6.3. SessionLocaleResolver
13.6.4. LocaleChangeInterceptor
13.7. Using themes
13.7.1. Introduction
13.7.2. Defining themes
13.7.3. Theme resolvers
13.8. Spring's multipart (fileupload) support
13.8.1. Introduction
13.8.2. Using the MultipartResolver
13.8.3. Handling a file upload in a form
13.9. Using Spring's form tag library
13.9.1. Configuration
13.9.2. The form tag
13.9.3. The input tag
13.9.4. The checkbox tag
13.9.5. The radiobutton tag
13.9.6. The password tag
13.9.7. The select tag
13.9.8. The option tag
13.9.9. The options tag
13.9.10. The textarea tag
13.9.11. The hidden tag
13.9.12. The errors tag
13.10. Handling exceptions
13.11. Convention over configuration
13.11.1. The Controller - ControllerClassNameHandlerMapping
13.11.2. The Model - ModelMap (ModelAndView)
13.11.3. The View - RequestToViewNameTranslator
13.12. Further Resources
14. Integrating view technologies
14.1. Introduction
14.2. JSP & JSTL
14.2.1. View resolvers
14.2.2. 'Plain-old' JSPs versus JSTL
14.2.3. Additional tags facilitating development
14.3. Tiles
14.3.1. Dependencies
14.3.2. How to integrate Tiles
14.3.2.1. InternalResourceViewResolver
14.3.2.2. ResourceBundleViewResolver
14.4. Velocity & FreeMarker
14.4.1. Dependencies
14.4.2. Context configuration
14.4.3. Creating templates
14.4.4. Advanced configuration
14.4.4.1. velocity.properties
14.4.4.2. FreeMarker
14.4.5. Bind support and form handling
14.4.5.1. The bind macros
14.4.5.2. simple binding
14.4.5.3. form input generation macros
14.4.5.4. HTML escaping and XHTML compliance
14.5. XSLT
14.5.1. My First Words
14.5.1.1. Bean definitions
14.5.1.2. Standard MVC controller code
14.5.1.3. Convert the model data to XML
14.5.1.4. Defining the view properties
14.5.1.5. Document transformation
14.5.2. Summary
14.6. Document views (PDF/Excel)
14.6.1. Introduction
14.6.2. Configuration and setup
14.6.2.1. Document view definitions
14.6.2.2. Controller code
14.6.2.3. Subclassing for Excel views
14.6.2.4. Subclassing for PDF views
14.7. JasperReports
14.7.1. Dependencies
14.7.2. Configuration
14.7.2.1. Configuring the ViewResolver
14.7.2.2. Configuring the Views
14.7.2.3. About Report Files
14.7.2.4. Using JasperReportsMultiFormatView
14.7.3. Populating the ModelAndView
14.7.4. Working with Sub-Reports
14.7.4.1. Configuring Sub-Report Files
14.7.4.2. Configuring Sub-Report Data Sources
14.7.5. Configuring Exporter Parameters
15. Integrating with other web frameworks
15.1. Introduction
15.2. Common configuration
15.3. JavaServer Faces
15.3.1. DelegatingVariableResolver
15.3.2. FacesContextUtils
15.4. Struts
15.4.1. ContextLoaderPlugin
15.4.1.1. DelegatingRequestProcessor
15.4.1.2. DelegatingActionProxy
15.4.2. ActionSupport Classes
15.5. Tapestry
15.5.1. Injecting Spring-managed beans
15.5.1.1. Dependency Injecting Spring Beans into Tapestry pages
15.5.1.2. Component definition files
15.5.1.3. Adding abstract accessors
15.5.1.4. Dependency Injecting Spring Beans into Tapestry pages - Tapestry 4.0+ style
15.6. WebWork
15.7. Further Resources
16. Portlet MVC Framework
16.1. Introduction
16.1.1. Controllers - The C in MVC
16.1.2. Views - The V in MVC
16.1.3. Web-scoped beans
16.2. The DispatcherPortlet
16.3. The ViewRendererServlet
16.4. Controllers
16.4.1. AbstractController and PortletContentGenerator
16.4.2. Other simple controllers
16.4.3. Command Controllers
16.4.4. PortletWrappingController
16.5. Handler mappings
16.5.1. PortletModeHandlerMapping
16.5.2. ParameterHandlerMapping
16.5.3. PortletModeParameterHandlerMapping
16.5.4. Adding HandlerInterceptors
16.5.5. HandlerInterceptorAdapter
16.5.6. ParameterMappingInterceptor
16.6. Views and resolving them
16.7. Multipart (file upload) support
16.7.1. Using the PortletMultipartResolver
16.7.2. Handling a file upload in a form
16.8. Handling exceptions
16.9. Portlet application deployment
IV. Integration
17. Remoting and web services using Spring
17.1. Introduction
17.2. Exposing services using RMI
17.2.1. Exporting the service using the RmiServiceExporter
17.2.2. Linking in the service at the client
17.3. Using Hessian or Burlap to remotely call services via HTTP
17.3.1. Wiring up the DispatcherServlet for Hessian
17.3.2. Exposing your beans by using the HessianServiceExporter
17.3.3. Linking in the service on the client
17.3.4. Using Burlap
17.3.5. Applying HTTP basic authentication to a service exposed through Hessian or Burlap
17.4. Exposing services using HTTP invokers
17.4.1. Exposing the service object
17.4.2. Linking in the service at the client
17.5. Web services
17.5.1. Exposing services using JAX-RPC
17.5.2. Accessing web services
17.5.3. Register Bean Mappings
17.5.4. Registering our own Handler
17.5.5. Exposing web services using XFire
17.6. JMS
17.6.1. Server-side configuration
17.6.2. Client-side configuration
17.7. Auto-detection is not implemented for remote interfaces
17.8. Considerations when choosing a technology
18. Enterprise Java Bean (EJB) integration
18.1. Introduction
18.2. Accessing EJBs
18.2.1. Concepts
18.2.2. Accessing local SLSBs
18.2.3. Accessing remote SLSBs
18.3. Using Spring's convenience EJB implementation classes
19. JMS
19.1. Introduction
19.2. Using Spring JMS
19.2.1. JmsTemplate
19.2.2. Connections
19.2.3. Destination Management
19.2.4. Message Listener Containers
19.2.4.1. SimpleMessageListenerContainer
19.2.4.2. DefaultMessageListenerContainer
19.2.4.3. ServerSessionMessageListenerContainer
19.2.5. Transaction management
19.3. Sending a Message
19.3.1. Using Message Converters
19.3.2. SessionCallback and ProducerCallback
19.4. Receiving a message
19.4.1. Synchronous Reception
19.4.2. Asynchronous Reception - Message-Driven POJOs
19.4.3. The SessionAwareMessageListener interface
19.4.4. The MessageListenerAdapter
19.4.5. Participating in transactions
20. JMX
20.1. Introduction
20.2. Exporting your beans to JMX
20.2.1. Creating an MBeanServer
20.2.2. Reusing an existing MBeanServer
20.2.3. Lazy-initialized MBeans
20.2.4. Automatic registration of MBeans
20.2.5. Controlling the registration behavior
20.3. Controlling the management interface of your beans
20.3.1. The MBeanInfoAssembler Interface
20.3.2. Using source-Level metadata
20.3.3. Using JDK 5.0 Annotations
20.3.4. Source-Level Metadata Types
20.3.5. The AutodetectCapableMBeanInfoAssembler interface
20.3.6. Defining Management interfaces using Java interfaces
20.3.7. Using MethodNameBasedMBeanInfoAssembler
20.4. Controlling the ObjectNames for your beans
20.4.1. Reading ObjectNames from Properties
20.4.2. Using the MetadataNamingStrategy
20.5. JSR-160 Connectors
20.5.1. Server-side Connectors
20.5.2. Client-side Connectors
20.5.3. JMX over Burlap/Hessian/SOAP
20.6. Accessing MBeans via Proxies
20.7. Notifications
20.7.1. Registering Listeners for Notifications
20.7.2. Publishing Notifications
20.8. Further Resources
21. JCA CCI
21.1. Introduction
21.2. Configuring CCI
21.2.1. Connector configuration
21.2.2. ConnectionFactory configuration in Spring
21.2.3. Configuring CCI connections
21.2.4. Using a single CCI connection
21.3. Using Spring's CCI access support
21.3.1. Record conversion
21.3.2. The CciTemplate
21.3.3. DAO support
21.3.4. Automatic output record generation
21.3.5. Summary
21.3.6. Using a CCI Connection and Interaction directly
21.3.7. Example for CciTemplate usage
21.4. Modeling CCI access as operation objects
21.4.1. MappingRecordOperation
21.4.2. MappingCommAreaOperation
21.4.3. Automatic output record generation
21.4.4. Summary
21.4.5. Example for MappingRecordOperation usage
21.4.6. Example for MappingCommAreaOperation usage
21.5. Transactions
22. Email
22.1. Introduction
22.2. Usage
22.2.1. Basic MailSender and SimpleMailMessage usage
22.2.2. Using the JavaMailSender and the MimeMessagePreparator
22.3. Using the JavaMail MimeMessageHelper
22.3.1. Sending attachments and inline resources
22.3.1.1. Attachments
22.3.1.2. Inline resources
22.3.2. Creating email content using a templating library
22.3.2.1. A Velocity-based example
23. Scheduling and Thread Pooling
23.1. Introduction
23.2. Using the OpenSymphony Quartz Scheduler
23.2.1. Using the JobDetailBean
23.2.2. Using the MethodInvokingJobDetailFactoryBean
23.2.3. Wiring up jobs using triggers and the SchedulerFactoryBean
23.3. Using JDK Timer support
23.3.1. Creating custom timers
23.3.2. Using the MethodInvokingTimerTaskFactoryBean
23.3.3. Wrapping up: setting up the tasks using the TimerFactoryBean
23.4. The Spring TaskExecutor abstraction
23.4.1. TaskExecutor types
23.4.2. Using a TaskExecutor
24. Dynamic language support
24.1. Introduction
24.2. A first example
24.3. Defining beans that are backed by dynamic languages
24.3.1. Common concepts
24.3.1.1. The <lang:language/> element
24.3.1.2. Refreshable beans
24.3.1.3. Inline dynamic language source files
24.3.1.4. Understanding Constructor Injection in the context of dynamic-language-backed beans
24.3.2. JRuby beans
24.3.3. Groovy beans
24.3.3.1. Customising Groovy objects via a callback
24.3.4. BeanShell beans
24.4. Scenarios
24.4.1. Scripted Spring MVC Controllers
24.4.2. Scripted Validators
24.5. Bits and bobs
24.5.1. AOP - advising scripted beans
24.5.2. Scoping
24.6. Further Resources
25. Annotations and Source Level Metadata Support
25.1. Introduction
25.2. Spring's metadata support
25.3. Annotations
25.3.1. @Required
25.3.2. Other @Annotations in Spring
25.4. Integration with Jakarta Commons Attributes
25.5. Metadata and Spring AOP autoproxying
25.5.1. Fundamentals
25.5.2. Declarative transaction management
25.5.3. Pooling
25.5.4. Custom metadata
25.6. Using attributes to minimize MVC web tier configuration
V. Sample applications
26. Showcase applications
26.1. Introduction
26.2. Spring MVC Controllers implemented in a dynamic language
26.2.1. Build and deployment
26.3. Implementing DAOs using SimpleJdbcTemplate and @Repository
26.3.1. The domain
26.3.2. The data access objects
26.3.3. Build
A. XML Schema-based configuration
A.1. Introduction
A.2. XML Schema-based configuration
A.2.1. Referencing the schemas
A.2.2. The util schema
A.2.2.1. <util:constant/>
A.2.2.2. <util:property-path/>
A.2.2.3. <util:properties/>
A.2.2.4. <util:list/>
A.2.2.5. <util:map/>
A.2.2.6. <util:set/>
A.2.3. The jee schema
A.2.3.1. <jee:jndi-lookup/> (simple)
A.2.3.2. <jee:jndi-lookup/> (with single JNDI environment setting)
A.2.3.3. <jee:jndi-lookup/> (with multiple JNDI environment settings)
A.2.3.4. <jee:jndi-lookup/> (complex)
A.2.3.5. <jee:local-slsb/> (simple)
A.2.3.6. <jee:local-slsb/> (complex)
A.2.3.7. <jee:remote-slsb/>
A.2.4. The lang schema
A.2.5. The tx (transaction) schema
A.2.6. The aop schema
A.2.7. The tool schema
A.2.8. The beans schema
A.3. Setting up your IDE
A.3.1. Setting up Eclipse
A.3.2. Setting up IntelliJ IDEA
A.3.3. Integration issues
A.3.3.1. XML parsing errors in the Resin v.3 application server
B. Extensible XML authoring
B.1. Introduction
B.2. Authoring the schema
B.3. Coding a NamespaceHandler
B.4. Coding a BeanDefinitionParser
B.5. Registering the handler and the schema
B.5.1. 'META-INF/spring.handlers'
B.5.2. 'META-INF/spring.schemas'
B.6. Using a custom extension in your Spring XML configuration
B.7. Meatier examples
B.7.1. Nesting custom tags within custom tags
B.7.2. Custom attributes on 'normal' elements
B.8. Further Resources
C. spring-beans-2.0.dtd
D. spring.tld
D.1. Introduction
D.2. The bind tag
D.3. The escapeBody tag
D.4. The hasBindErrors tag
D.5. The htmlEscape tag
D.6. The message tag
D.7. The nestedPath tag
D.8. The theme tag
D.9. The transform tag
E. spring-form.tld
E.1. Introduction
E.2. The checkbox tag
E.3. The errors tag
E.4. The form tag
E.5. The hidden tag
E.6. The input tag
E.7. The label tag
E.8. The option tag
E.9. The options tag
E.10. The password tag
E.11. The radiobutton tag
E.12. The select tag
E.13. The textarea tag

Preface

Developing software applications is hard enough even with good tools and technologies. Implementing applications using platforms which promise everything but turn out to be heavy-weight, hard to control and not very efficient during the development cycle makes it even harder. Spring provides a light-weight solution for building enterprise-ready applications, while still supporting the possibility of using declarative transaction management, remote access to your logic using RMI or web services, and various options for persisting your data to a database. Spring provides a full-featured MVC framework, and transparent ways of integrating AOP into your software.

Spring could potentially be a one-stop-shop for all your enterprise applications; however, Spring is modular, allowing you to use just those parts of it that you need, without having to bring in the rest. You can use the IoC container, with Struts on top, but you could also choose to use just the Hibernate integration code or the JDBC abstraction layer. Spring has been (and continues to be) designed to be non-intrusive, meaning dependencies on the framework itself are generally none (or absolutely minimal, depending on the area of use).

This document provides a reference guide to Spring's features. Since this document is still to be considered very much work-in-progress, if you have any requests or comments, please post them on the user mailing list or on the support forums at http://forum.springframework.org/.

Before we go on, a few words of gratitude are due to Christian Bauer (of the Hibernate team), who prepared and adapted the DocBook-XSL software in order to be able to create Hibernate's reference guide, thus also allowing us to create this one. Also thanks to Russell Healy for doing an extensive and valuable review of some of the material.

Chapter 1. Introduction

Java applications (a loose term which runs the gamut from constrained applets to full-fledged n-tier server-side enterprise applications) typically are composed of a number of objects that collaborate with one another to form the application proper. The objects in an application can thus be said to have dependencies between themselves.

The Java language and platform provides a wealth of functionality for architecting and building applications, ranging all the way from the very basic building blocks of primitive types and classes (and the means to define new classes), to rich full-featured application servers and web frameworks. One area that is decidedly conspicuous by its absence is any means of taking the basic building blocks and composing them into a coherent whole; this area has typically been left to the purview of the architects and developers tasked with building an application (or applications). Now to be fair, there are a number of design patterns devoted to the business of composing the various classes and object instances that make up an all-singing, all-dancing application. Design patterns such as Factory, Abstract Factory, Builder, Decorator, and Service Locator (to name but a few) have widespread recognition and acceptance within the software development industry (presumably that is why these patterns have been formalized as patterns in the first place). This is all very well, but these patterns are just that: best practices given a name, typically together with a description of what the pattern does, where the pattern is typically best applied, the problems that the application of the pattern addresses, and so forth. Notice that the preceding sentence used the phrase “... a description of what the pattern does...”; pattern books and wikis are typically listings of such formalized best practice that you can certainly take away, mull over, and then implement yourself in your application.

The IoC component of the Spring Framework addresses the enterprise concern of taking the classes, objects, and services that are to compose an application, by providing a formalized means of composing these various disparate components into a fully working application ready for use. The Spring Framework takes best practices that have been proven over the years in numerous applications and formalized as design patterns, and actually codifies these patterns as first class objects that you as an architect and developer can take away and integrate into your own application(s). This is a Very Good Thing Indeed as attested to by the numerous organizations and institutions that have used the Spring Framework to engineer robust, maintainable applications.

1.1. Overview

The Spring Framework contains many features, which are well organized into the seven modules shown in the diagram below. This section discusses each of the modules in turn.

Overview of the Spring Framework

The Core package is the most fundamental part of the framework and provides the IoC and Dependency Injection features. The basic concept here is the BeanFactory, which provides a sophisticated implementation of the factory pattern which removes the need for programmatic singletons and allows you to decouple the configuration and specification of dependencies from your actual program logic.

The Context package builds on the solid base provided by the Core package: it provides a way to access objects in a framework-style manner, somewhat reminiscent of a JNDI registry. The context package inherits its features from the beans package and adds support for internationalization (I18N) (using, for example, resource bundles), event propagation, resource loading, and the transparent creation of contexts by, for example, a servlet container.

The DAO package provides a JDBC-abstraction layer that removes the need to do tedious JDBC coding and parsing of database-vendor specific error codes. Also, the JDBC package provides a way to do programmatic as well as declarative transaction management, not only for classes implementing special interfaces, but for all your POJOs (plain old Java objects).

The ORM package provides integration layers for popular object-relational mapping APIs, including JPA, JDO, Hibernate, and iBatis. Using the ORM package you can use all those O/R-mappers in combination with all the other features Spring offers, such as the simple declarative transaction management feature mentioned previously.

Spring's AOP package provides an AOP Alliance-compliant aspect-oriented programming implementation allowing you to define, for example, method-interceptors and pointcuts to cleanly decouple code implementing functionality that should logically speaking be separated. Using source-level metadata functionality you can also incorporate all kinds of behavioral information into your code, in a manner similar to that of .NET attributes.

Spring's Web package provides basic web-oriented integration features, such as multipart file-upload functionality, the initialization of the IoC container using servlet listeners and a web-oriented application context. When using Spring together with WebWork or Struts, this is the package to integrate with.

Spring's MVC package provides a Model-View-Controller (MVC) implementation for web-applications. Spring's MVC framework is not just any old implementation; it provides a clean separation between domain model code and web forms, and allows you to use all the other features of the Spring Framework.

1.2. Usage scenarios

With the building blocks described above you can use Spring in all sorts of scenarios, from applets up to fully-fledged enterprise applications using Spring's transaction management functionality and web framework integration.

Typical full-fledged Spring web application

By using Spring's declarative transaction management features the web application is fully transactional, just as it would be when using container managed transactions as provided by Enterprise JavaBeans. All your custom business logic can be implemented using simple POJOs, managed by Spring's IoC container. Additional services include support for sending email, and validation that is independent of the web layer enabling you to choose where to execute validation rules. Spring's ORM support is integrated with JPA, Hibernate, JDO and iBatis; for example, when using Hibernate, you can continue to use your existing mapping files and standard Hibernate SessionFactory configuration. Form controllers seamlessly integrate the web-layer with the domain model, removing the need for ActionForms or other classes that transform HTTP parameters to values for your domain model.

Spring middle-tier using a third-party web framework

Sometimes the current circumstances do not allow you to completely switch to a different framework. The Spring Framework does not force you to use everything within it; it is not an all-or-nothing solution. Existing front-ends built using WebWork, Struts, Tapestry, or other UI frameworks can be integrated perfectly well with a Spring-based middle-tier, allowing you to use the transaction features that Spring offers. The only thing you need to do is wire up your business logic using an ApplicationContext and integrate your web layer using a WebApplicationContext.

Remoting usage scenario

When you need to access existing code via web services, you can use Spring's Hessian-, Burlap-, Rmi- or JaxRpcProxyFactory classes. Enabling remote access to existing applications is suddenly not that hard anymore.

EJBs - Wrapping existing POJOs

The Spring Framework also provides an access and abstraction layer for Enterprise JavaBeans, enabling you to reuse your existing POJOs and wrap them in Stateless Session Beans for use in scalable, fail-safe web applications that might need declarative security.

Chapter 2. What's new in Spring 2.0?

2.1. Introduction

If you have been using the Spring Framework for some time, you will be aware that Spring has just undergone a major revision.

This revision includes a host of new features, and a lot of the existing functionality has been reviewed and improved. In fact, so much of Spring is shiny and improved that the Spring development team decided that the next release of Spring merited an increment of the version number; and so Spring 2.0 was announced in December 2005 at the Spring Experience conference in Florida.

This chapter is a guide to the new and improved features of Spring 2.0. It is intended to provide a high-level summary so that seasoned Spring architects and developers can become immediately familiar with the new Spring 2.0 functionality. For more in-depth information on the features, please refer to the corresponding sections hyperlinked from within this chapter.

Some of the new and improved functionality described below has been (or will be) backported into the Spring 1.2.x release line. Please do consult the changelogs for the 1.2.x releases to see if a feature has been backported.

2.2. The Inversion of Control (IoC) container

One of the areas that contains a considerable number of 2.0 improvements is Spring's IoC container.

2.2.1. Easier XML configuration

Spring XML configuration is now even easier, thanks to the advent of the new XML configuration syntax based on XML Schema. If you want to take advantage of the new tags that Spring provides (and the Spring team certainly suggest that you do because they make configuration less verbose and easier to read), then do read the section entitled Appendix A, XML Schema-based configuration.

On a related note, there is a new, updated DTD for Spring 2.0 that you may wish to reference if you cannot take advantage of the XML Schema-based configuration. The DOCTYPE declaration is included below for your convenience, but the interested reader should definitely read the 'spring-beans-2.0.dtd' DTD included in the 'dist/resources' directory of the Spring 2.0 distribution.

<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN 2.0//EN"
		  "http://www.springframework.org/dtd/spring-beans-2.0.dtd">

2.2.2. New bean scopes

Previous versions of Spring had IoC container level support for exactly two distinct bean scopes (singleton and prototype). Spring 2.0 improves on this by not only providing a number of additional scopes depending on the environment in which Spring is being deployed (for example, request and session scoped beans in a web environment), but also by providing integration points so that Spring users can create their own scopes.

It should be noted that although the underlying (and internal) implementation for singleton- and prototype-scoped beans has been changed, this change is totally transparent to the end user... no existing configuration needs to change, and no existing configuration will break.

Both the new and the original scopes are detailed in the section entitled Section 3.4, “Bean scopes”.

2.2.3. Extensible XML authoring

Not only is XML configuration easier to write, it is now also extensible.

What 'extensible' means in this context is that you, as an application developer, or (more likely) as a third-party framework or product vendor, can write custom tags that other developers can then plug into their own Spring configuration files. This allows you to have a domain-specific language (the term is used loosely here) of sorts reflected in the specific configuration of your own components.

Implementing custom Spring tags may not be of interest to every single application developer or enterprise architect using Spring in their own projects. We expect third-party vendors to be highly interested in developing custom configuration tags for use in Spring configuration files.

The extensible configuration mechanism is documented in Appendix B, Extensible XML authoring.

2.3. Aspect Oriented Programming (AOP)

Spring 2.0 has a much improved AOP offering. The Spring AOP framework itself is markedly easier to configure in XML, and significantly less verbose as a result; and Spring 2.0 integrates with the AspectJ pointcut language and @AspectJ aspect declaration style. The chapter entitled Chapter 6, Aspect Oriented Programming with Spring is dedicated to describing this new support.

2.3.1. Easier AOP XML configuration

Spring 2.0 introduces new schema support for defining aspects backed by regular Java objects. This support takes advantage of the AspectJ pointcut language and offers fully typed advice (i.e. no more casting and Object[] argument manipulation). Details of this support can be found in the section entitled Section 6.3, “Schema-based AOP support”.

2.3.2. Support for @AspectJ aspects

Spring 2.0 also supports aspects defined using the @AspectJ annotations. These aspects can be shared between AspectJ and Spring AOP, and require (honestly!) only some simple configuration. Said support for @AspectJ aspects is discussed in Section 6.2, “@AspectJ support”.
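
By way of illustration only (a minimal sketch, not taken from the referenced section; the package name and advice body are hypothetical), an @AspectJ-style aspect is just an annotated Java class:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class ServiceLoggingAspect {

    // runs before the execution of any public method in the (hypothetical) com.xyz.service package
    @Before("execution(public * com.xyz.service..*.*(..))")
    public void logServiceAccess() {
        System.out.println("About to execute a service method");
    }
}

Such a class would additionally have to be registered as a bean and @AspectJ support enabled in the container, as described in the referenced section.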

2.4. The Middle Tier

2.4.1. Easier configuration of declarative transactions in XML

The way that transactions are configured in Spring 2.0 has been changed significantly. The previous 1.2.x style of configuration continues to be valid (and supported), but the new style is markedly less verbose and is the recommended style. Spring 2.0 also ships with an AspectJ aspects library that you can use to make pretty much any object transactional - even objects not created by the Spring IoC container.

The chapter entitled Chapter 9, Transaction management contains all of the details.

2.4.2. JPA

Spring 2.0 ships with a JPA abstraction layer that is similar in intent to Spring's JDBC abstraction layer in terms of scope and general usage patterns.

If you are interested in using a JPA-implementation as the backbone of your persistence layer, the section entitled Section 12.6, “JPA” is dedicated to detailing Spring's support and value-add in this area.

2.4.3. Asynchronous JMS

Prior to Spring 2.0, Spring's JMS offering was limited to sending messages and the synchronous receiving of messages. This functionality (encapsulated in the JmsTemplate class) is great, but it doesn't address the requirement for the asynchronous receiving of messages.

Spring 2.0 now ships with full support for the reception of messages in an asynchronous fashion, as detailed in the section entitled Section 19.4.2, “Asynchronous Reception - Message-Driven POJOs”.
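
As a rough sketch (the class is hypothetical, and the wiring to a destination is omitted; see the referenced section for the actual configuration options), an asynchronous receiver can be as simple as a standard JMS MessageListener implementation hosted by one of Spring's message listener containers, such as the DefaultMessageListenerContainer:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class ExampleListener implements MessageListener {

    // invoked by the hosting message listener container for each incoming message
    public void onMessage(Message message) {
        if (message instanceof TextMessage) {
            try {
                System.out.println(((TextMessage) message).getText());
            }
            catch (JMSException ex) {
                throw new RuntimeException(ex);
            }
        }
    }
}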

2.4.4. JDBC

There are some small (but nevertheless notable) new classes in the Spring Framework's JDBC support library. The first, NamedParameterJdbcTemplate, provides support for programming JDBC statements using named parameters (as opposed to programming JDBC statements using only classic placeholder ('?') arguments).
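
A minimal sketch of named-parameter usage (the table and column names are hypothetical):

import java.util.HashMap;
import java.util.Map;

import javax.sql.DataSource;

import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;

public class ActorDao {

    private NamedParameterJdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
    }

    public int countActorsByFirstName(String firstName) {
        // ':first_name' is resolved from the supplied parameter map rather than by position
        String sql = "select count(*) from t_actor where first_name = :first_name";
        Map parameters = new HashMap();
        parameters.put("first_name", firstName);
        return this.jdbcTemplate.queryForInt(sql, parameters);
    }
}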

Another of the new classes, the SimpleJdbcTemplate, is aimed at making the JdbcTemplate even easier to use when you are developing against Java 5+ (Tiger).
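
Again purely as a hedged sketch (hypothetical table and columns), the SimpleJdbcTemplate lets you pass query arguments as varargs instead of building an Object[] explicitly:

import javax.sql.DataSource;

import org.springframework.jdbc.core.simple.SimpleJdbcTemplate;

public class SimpleActorDao {

    private SimpleJdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new SimpleJdbcTemplate(dataSource);
    }

    // the varargs replace the explicit Object[] required by the classic JdbcTemplate
    public int countActors(String firstName, String lastName) {
        return this.jdbcTemplate.queryForInt(
                "select count(*) from t_actor where first_name = ? and last_name = ?",
                firstName, lastName);
    }
}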

2.5. The Web Tier

The web tier support has been substantially improved and expanded in Spring 2.0.

2.5.1. A form tag library for Spring MVC

A rich JSP tag library for Spring MVC was the JIRA issue that garnered the most votes from Spring users (by a wide margin).

Spring 2.0 ships with a full featured JSP tag library that makes the job of authoring JSP pages much easier when using Spring MVC; the Spring team is confident it will satisfy all of those developers who voted for the issue on JIRA. The new tag library is itself covered in the section entitled Section 13.9, “Using Spring's form tag library”, and a quick reference to all of the new tags can be found in the appendix entitled Appendix E, spring-form.tld.

2.5.2. Sensible defaulting in Spring MVC

For a lot of projects, sticking to established conventions and having reasonable defaults is just what the projects need... this theme of convention-over-configuration now has explicit support in Spring MVC. What this means is that if you establish a set of naming conventions for your Controllers and views, you can substantially cut down on the amount of XML configuration that is required to set up handler mappings, view resolvers, ModelAndView instances, etc. This is a great boon with regards to rapid prototyping, and can also lend a degree of (always good-to-have) consistency across a codebase.
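
As an illustrative sketch only (the controller class is hypothetical; the exact mapping rules are covered in the section referenced below), with the ControllerClassNameHandlerMapping in play a controller such as the following would, by naming convention, handle requests to '/viewshoppingcart*' without any explicit handler mapping entry:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.AbstractController;

public class ViewShoppingCartController extends AbstractController {

    protected ModelAndView handleRequestInternal(HttpServletRequest request, HttpServletResponse response) throws Exception {
        // render a (hypothetical) view named 'viewShoppingCart'
        return new ModelAndView("viewShoppingCart");
    }
}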

Spring MVC's convention-over-configuration support is detailed in the section entitled Section 13.11, “Convention over configuration”.

2.5.3. Portlet framework

Spring 2.0 ships with a Portlet framework that is conceptually similar to the Spring MVC framework. Detailed coverage of the Spring Portlet framework can be found in the section entitled Chapter 16, Portlet MVC Framework.

2.6. Everything else

This section outlines all of the other new and improved Spring 2.0 features and functionality.

2.6.1. Dynamic language support

Spring 2.0 now has support for beans written in languages other than Java, with the currently supported dynamic languages being JRuby, Groovy and BeanShell. This dynamic language support is comprehensively detailed in the section entitled Chapter 24, Dynamic language support.

2.6.2. JMX

The Spring Framework now has support for Notifications; it is also possible to exercise declarative control over the registration behavior of MBeans with an MBeanServer.

2.6.3. Task scheduling

Spring 2.0 offers an abstraction around the scheduling of tasks. For the interested developer, the section entitled Section 23.4, “The Spring TaskExecutor abstraction” contains all of the details.

2.7. Migrating to Spring 2.0

This section details issues that may arise during any migration from Spring 1.2.x to Spring 2.0. Feel free to take this next statement with a pinch of salt, but upgrading to Spring 2.0 from a Spring 1.2 application should simply be a matter of dropping the Spring 2.0 jar into the appropriate location in your application's directory structure.

The key word in the last sentence was of course “should”. Whether the upgrade is seamless or not depends on how much of the Spring APIs you are using in your code. Spring 2.0 removed pretty much all of the classes and methods previously marked as deprecated in the Spring 1.2.x codebase, so if you have been using such classes and methods, you will of course have to use alternative classes and methods (some of which are summarized below).

With regards to configuration, Spring 1.2.x style XML configuration is 100% compatible with the Spring 2.0 library. Of course if you are still using the Spring 1.2.x DTD, then you won't be able to take advantage of some of the new Spring 2.0 functionality (such as scopes and easier AOP and transaction configuration), but nothing will blow up.

The suggested migration strategy is to drop in the Spring 2.0 jar(s) to benefit from the improved code present in the release (bug fixes, optimizations, etc.). You can then, on an incremental basis, choose to start using the new Spring 2.0 features and configuration. For example, you could choose to start configuring just your aspects in the new Spring 2.0 style; it is perfectly valid to have 90% of your configuration using the old-school Spring 1.2.x configuration (which references the 1.2.x DTD), and have the other 10% using the new Spring 2.0 configuration (which references the 2.0 DTD or XSD). Bear in mind that you are not forced to upgrade your XML configuration should you choose to drop in the Spring 2.0 libraries.

2.7.1. Changes

For a comprehensive list of changes, consult the 'changelog.txt' file that is located in the top level directory of the Spring Framework 2.0 distribution.

2.7.1.1. Jar packaging

The packaging of the Spring Framework jars has changed quite substantially between the 1.2.x and 2.0 releases. In particular, there are now dedicated jars for the JDO, Hibernate 2/3, and TopLink ORM integration classes: they are no longer bundled in the core 'spring.jar' file.

2.7.1.2. XML configuration

Spring 2.0 ships with XSDs that describe Spring's XML metadata format in a much richer fashion than the DTD that shipped with previous versions. The old DTD is still fully supported, but if possible you are encouraged to reference the XSD files at the top of your bean definition files.

One thing that has changed in a (somewhat) breaking fashion is the way that bean scopes are defined. If you are using the Spring 1.2 DTD you can continue to use the 'singleton' attribute. You can however choose to reference the new Spring 2.0 DTD which does not permit the use of the 'singleton' attribute, but rather uses the 'scope' attribute to define the bean lifecycle scope.

2.7.1.3. Deprecated classes and methods

A number of classes and methods that previously were marked as @deprecated have been removed from the Spring 2.0 codebase. The Spring team decided that the 2.0 release marked a fresh start of sorts, and that any deprecated 'cruft' was better excised now instead of continuing to haunt the codebase for the foreseeable future.

As mentioned previously, for a comprehensive list of changes, consult the 'changelog.txt' file that is located in the top level directory of the Spring Framework 2.0 distribution.

The following classes/interfaces have been removed from the Spring 2.0 codebase:

  • ResultReader : Use the RowMapper interface instead.

  • BeanFactoryBootstrap : Consider using a BeanFactoryLocator or a custom bootstrap class instead.

2.7.1.4. Apache OJB

Please note that support for Apache OJB was totally removed from the main Spring source tree; the Apache OJB integration library is still available, but can be found in its new home in the Spring Modules project.

2.7.1.5. iBatis

Please note that support for iBATIS SQL Maps 1.3 has been removed. If you haven't done so already, upgrade to iBATIS SQL Maps 2.0/2.1.

2.7.1.6. UrlFilenameViewController

The view name that is determined by the UrlFilenameViewController now takes into account the nested path of the request. This is a breaking change from the original contract of the UrlFilenameViewController, and means that if you are upgrading to Spring 2.0 from Spring 1.x and you are using this class you might have to change your Spring Web MVC configuration slightly. Refer to the class level Javadocs of the UrlFilenameViewController to see examples of the new contract for view name determination.

2.8. Updated sample applications

A number of the sample applications have also been updated to showcase the new and improved features of Spring 2.0, so do take the time to investigate them. The aforementioned sample applications can be found in the 'samples' directory of the full Spring distribution ('spring-with-dependencies.[zip|tar.gz]'), and are documented (in part) in the chapter entitled Chapter 26, Showcase applications.

2.9. Improved documentation

The Spring reference documentation has also been substantially updated to reflect all of the features that are new in Spring 2.0. While every effort has been made to ensure that there are no errors in this documentation, some errors may nevertheless have crept in. If you do spot any typos or even more serious errors, and you can spare a few cycles during lunch, please do bring the error to the attention of the Spring team by raising an issue.

Special thanks to Arthur Loder for his tireless proofreading of the Spring Framework reference documentation and Javadocs.

Part I. Core Technologies

This initial part of the reference documentation covers all of those technologies that are absolutely integral to the Spring Framework.

Foremost amongst these is the Spring Framework's Inversion of Control (IoC) container. A thorough treatment of the Spring Framework's IoC container is closely followed by comprehensive coverage of Spring's Aspect-Oriented Programming (AOP) technologies. The Spring Framework has its own AOP framework, which is conceptually easy to understand, and which successfully addresses the 80% sweet spot of AOP requirements in Java enterprise programming.

Coverage of Spring's integration with AspectJ (currently the richest - in terms of features - and certainly most mature AOP implementation in the Java enterprise space) is also provided.

Finally, the adoption of the test-driven-development (TDD) approach to software development is certainly advocated by the Spring team, and so coverage of Spring's support for integration testing is covered (alongside best practices for unit testing). The Spring team have found that the correct use of IoC certainly does make both unit and integration testing easier (in that the presence of setter methods and appropriate constructors on classes makes them easier to wire together on a test without having to set up service locator registries and suchlike)... the chapter dedicated solely to testing will hopefully convince you of this as well.

Chapter 3. The IoC container

3.1. Introduction

This chapter covers the Spring Framework's implementation of the Inversion of Control (IoC) [1] principle.

The org.springframework.beans and org.springframework.context packages provide the basis for the Spring Framework's IoC container. The BeanFactory interface provides an advanced configuration mechanism capable of managing objects of any nature. The ApplicationContext interface builds on top of the BeanFactory (it is a sub-interface) and adds other functionality such as easier integration with Spring's AOP features, message resource handling (for use in internationalization), event propagation, and application-layer specific contexts such as the WebApplicationContext for use in web applications.

In short, the BeanFactory provides the configuration framework and basic functionality, while the ApplicationContext adds more enterprise-centric functionality to it. The ApplicationContext is a complete superset of the BeanFactory, and any description of BeanFactory capabilities and behavior is to be considered to apply to the ApplicationContext as well.

This chapter is divided into two parts, with the first part covering the basic principles that apply to both the BeanFactory and ApplicationContext, and with the second part covering those features that apply only to the ApplicationContext interface.

3.2. Basics - containers and beans

In Spring, those objects that form the backbone of your application and that are managed by the Spring IoC container are referred to as beans. A bean is simply an object that is instantiated, assembled and otherwise managed by a Spring IoC container; other than that, there is nothing special about a bean (it is in all other respects one of probably many objects in your application). These beans, and the dependencies between them, are reflected in the configuration metadata used by a container.

3.2.1. The container

The org.springframework.beans.factory.BeanFactory is the actual representation of the Spring IoC container that is responsible for containing and otherwise managing the aforementioned beans.

The BeanFactory interface is the central IoC container interface in Spring. Its responsibilities include instantiating or sourcing application objects, configuring such objects, and assembling the dependencies between these objects.

There are a number of implementations of the BeanFactory interface that come supplied straight out-of-the-box with Spring. The most commonly used BeanFactory implementation is the XmlBeanFactory class. This implementation allows you to express the objects that compose your application, and the doubtless rich interdependencies between such objects, in terms of XML. The XmlBeanFactory takes this XML configuration metadata and uses it to create a fully configured system or application.

The Spring IoC container

3.2.1.1. Configuration metadata

As can be seen in the above image, the Spring IoC container consumes some form of configuration metadata; this configuration metadata is nothing more than how you (as an application developer) inform the Spring container as to how to “instantiate, configure, and assemble [the objects in your application]”. This configuration metadata is typically supplied in a simple and intuitive XML format. When using XML-based configuration metadata, you write bean definitions for those beans that you want the Spring IoC container to manage, and then let the container do its stuff.

[Note]Note

XML-based metadata is by far the most commonly used form of configuration metadata. It is not however the only form of configuration metadata that is allowed. The Spring IoC container itself is totally decoupled from the format in which this configuration metadata is actually written. At the time of writing, you can supply this configuration metadata using either XML, the Java properties format, or programmatically (using Spring's public API). The XML-based configuration metadata format really is simple though, and so the remainder of this chapter will use the XML format to convey key concepts and features of the Spring IoC container.
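
For illustration, a minimal sketch of the properties-format alternative mentioned in the note above (the 'beans.properties' file name is hypothetical):

import org.springframework.beans.factory.support.DefaultListableBeanFactory;
import org.springframework.beans.factory.support.PropertiesBeanDefinitionReader;
import org.springframework.core.io.ClassPathResource;

public class PropertiesMetadataExample {

    public static void main(String[] args) {
        // bean definitions are read from a properties file instead of from XML
        DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
        PropertiesBeanDefinitionReader reader = new PropertiesBeanDefinitionReader(factory);
        reader.loadBeanDefinitions(new ClassPathResource("beans.properties"));
    }
}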

Please be advised that in the vast majority of application scenarios, explicit user code is not required to instantiate one or more instances of a Spring IoC container. For example, in a web application scenario, a simple eight (or so) lines of absolutely boilerplate J2EE web descriptor XML in the web.xml file of the application will typically suffice (see Section 3.8.4, “Convenient ApplicationContext instantiation for web applications”).

Spring configuration consists of at least one bean definition that the container must manage, but typically there will be more than one bean definition. When using XML-based configuration metadata, these beans are configured as <bean/> elements inside a top-level <beans/> element.

These bean definitions correspond to the actual objects that make up your application. Typically you will have bean definitions for your service layer objects, your data access objects (DAOs), presentation objects such as Struts Action instances, infrastructure objects such as Hibernate SessionFactory instances, JMS Queue references, etc. (the possibilities are of course endless, and are limited only by the scope and complexity of your application). (Typically one does not configure fine-grained domain objects in the container.)

Find below an example of the basic structure of XML-based configuration metadata.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

  <bean id="..." class="...">
    <!-- collaborators and configuration for this bean go here -->
  </bean>

  <bean id="..." class="...">
    <!-- collaborators and configuration for this bean go here -->
  </bean>

  <!-- more bean definitions go here... -->

</beans>

3.2.2. Instantiating a container

Instantiating a Spring IoC container is easy; find below some examples of how to do just that:

Resource resource = new FileSystemResource("beans.xml");
BeanFactory factory = new XmlBeanFactory(resource);

... or...

ClassPathResource resource = new ClassPathResource("beans.xml");
BeanFactory factory = new XmlBeanFactory(resource);

... or...

ApplicationContext context = new ClassPathXmlApplicationContext(
        new String[] {"applicationContext.xml", "applicationContext-part2.xml"});

// of course, an ApplicationContext is just a BeanFactory
BeanFactory factory = (BeanFactory) context;

3.2.2.1. Composing XML-based configuration metadata

It can often be useful to split up container definitions into multiple XML files. One way to then load an application context which is configured from all these XML fragments is to use the application context constructor which takes multiple Resource locations. With a bean factory, a bean definition reader can be used multiple times to read definitions from each file in turn.
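
To make the two approaches concrete, find below a small Java sketch; the file names used here are purely illustrative:

// composing an application context up front from several XML fragments
ApplicationContext ctx = new ClassPathXmlApplicationContext(
        new String[] {"services.xml", "daos.xml"});

// ...or reading each file in turn into a plain bean factory
DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
XmlBeanDefinitionReader reader = new XmlBeanDefinitionReader(factory);
reader.loadBeanDefinitions(new ClassPathResource("services.xml"));
reader.loadBeanDefinitions(new ClassPathResource("daos.xml"));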

Generally, the Spring team prefers the above approach, since it keeps container configuration files unaware of the fact that they are being combined with others. An alternate approach is to use one or more occurrences of the <import/> element to load bean definitions from another file (or files). Any <import/> elements must be placed before <bean/> elements in the file doing the importing. Let's look at a sample:

<beans>

    <import resource="services.xml"/>
    <import resource="resources/messageSource.xml"/>
    <import resource="/resources/themeSource.xml"/>

    <bean id="bean1" class="..."/>
    <bean id="bean2" class="..."/>

</beans>

In this example, external bean definitions are being loaded from 3 files, services.xml, messageSource.xml, and themeSource.xml. All location paths are considered relative to the definition file doing the importing, so services.xml in this case must be in the same directory or classpath location as the file doing the importing, while messageSource.xml and themeSource.xml must be in a resources location below the location of the importing file. As you can see, a leading slash is actually ignored, but given that these are considered relative paths, it is probably better form not to use the slash at all.

The contents of the files being imported must be fully valid XML bean definition files according to the Schema or DTD, including the top level <beans/> element.

3.2.3. The beans

As mentioned previously, a Spring IoC container manages one or more beans. These beans are created using the instructions defined in the configuration metadata that has been supplied to the container (typically in the form of XML <bean/> definitions).

Within the container itself, these bean definitions are represented as BeanDefinition objects, which contain (among other information) the following metadata:

  • a package-qualified class name: this is normally the actual implementation class of the bean being defined. However, if the bean is to be instantiated by invoking a static factory method instead of using a normal constructor, this will actually be the class name of the factory class.

  • bean behavioral configuration elements, which state how the bean should behave in the container (prototype or singleton, autowiring mode, initialization and destruction callbacks, and so forth).

  • constructor arguments and property values to set in the newly created bean. An example would be the number of connections to use in a bean that manages a connection pool (either specified as a property or as a constructor argument), or the pool size limit.

  • other beans which are needed for the bean to do its work, that is collaborators (also called dependencies).

The concepts listed above directly translate to a set of properties that each bean definition consists of. Some of these properties are listed below, along with a pointer to the section where each of them is documented further.

Table 3.1. The bean definition

  • class: Section 3.2.3.2, “Instantiating beans”

  • name: Section 3.2.3.1, “Naming beans”

  • scope: Section 3.4, “Bean scopes”

  • constructor arguments: Section 3.3.1, “Injecting dependencies”

  • properties: Section 3.3.1, “Injecting dependencies”

  • autowiring mode: Section 3.3.6, “Autowiring collaborators”

  • dependency checking mode: Section 3.3.7, “Checking for dependencies”

  • lazy-initialization mode: Section 3.3.5, “Lazily-instantiated beans”

  • initialization method: Section 3.5.1.1, “Initialization callbacks”

  • destruction method: Section 3.5.1.2, “Destruction callbacks”

Besides bean definitions which contain information on how to create a specific bean, certain BeanFactory implementations also permit the registration of existing objects that have been created outside the factory (by user code). The DefaultListableBeanFactory class supports this through the registerSingleton(..) method. Typical applications solely work with beans defined through metadata bean definitions, though.
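
For illustration, registering a pre-existing object might look something like the following sketch; the MyLegacyConnectionPool class and the 'connectionPool' bean name are hypothetical:

DefaultListableBeanFactory factory = new DefaultListableBeanFactory();

// register an instance that was created outside the container (by user code)
MyLegacyConnectionPool pool = new MyLegacyConnectionPool();
factory.registerSingleton("connectionPool", pool);

// the pre-existing instance can now be retrieved like any other bean
MyLegacyConnectionPool sameInstance = (MyLegacyConnectionPool) factory.getBean("connectionPool");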

3.2.3.1. Naming beans

Every bean has one or more ids (also called identifiers, or names; these terms refer to the same thing). These ids must be unique within the container the bean is hosted in. A bean will almost always have only one id, but if a bean has more than one id, the extra ones can essentially be considered aliases.

When using XML-based configuration metadata, you use the 'id' or 'name' attributes to specify the bean identifier(s). The 'id' attribute allows you to specify exactly one id, and as it is a real XML element ID attribute, the XML parser is able to do some extra validation when other elements reference the id; as such, it is the preferred way to specify a bean id. However, the XML specification does limit the characters which are legal in XML IDs. This is usually not a constraint, but if you need to use a character that is not legal in an XML ID, or want to introduce other aliases to the bean, you may also or instead specify one or more bean ids, separated by a comma (,), semicolon (;), or whitespace, in the 'name' attribute.

Please note that you are not required to supply a name for a bean. If no name is supplied explicitly, the container will generate a (unique) name for that bean. The motivations for not supplying a name for a bean will be discussed later (one use case is inner beans).

3.2.3.1.1. Aliasing beans

In a bean definition itself, you may supply more than one name for the bean, by using a combination of up to one name specified via the id attribute, and any number of other names via the name attribute. All these names can be considered equivalent aliases to the same bean, and are useful for some situations, such as allowing each component used in an application to refer to a common dependency using a bean name that is specific to that component itself.

Having to specify all aliases when the bean is actually defined is not always adequate however. It is sometimes desirable to introduce an alias for a bean which is defined elsewhere. In XML-based configuration metadata this may be accomplished via the use of the standalone <alias/> element. For example:

<alias name="fromName" alias="toName"/>

In this case, a bean in the same container which is named 'fromName' may, after this alias definition has been processed, also be referred to as 'toName'.

As a concrete example, consider the case where component A defines a DataSource bean called componentA-dataSource, in its XML fragment. Component B would however like to refer to the DataSource as componentB-dataSource in its XML fragment. And the main application, MyApp, defines its own XML fragment and assembles the final application context from all three fragments, and would like to refer to the DataSource as myApp-dataSource. This scenario can be easily handled by adding to the MyApp XML fragment the following standalone aliases:

<alias name="componentA-dataSource" alias="componentB-dataSource"/>
<alias name="componentA-dataSource" alias="myApp-dataSource" />

Now each component and the main app can refer to the dataSource via a name that is unique and guaranteed not to clash with any other definition (effectively there is a namespace), yet they refer to the same bean.

3.2.3.2. Instantiating beans

A bean definition can be seen as a recipe for creating one or more actual objects. The container looks at the recipe for a named bean when asked, and uses the configuration metadata encapsulated by that bean definition to create (or acquire) an actual object.

If you are using XML-based configuration metadata, you can specify the type (or class) of object that is to be instantiated using the 'class' attribute of the <bean/> element. This 'class' attribute (which internally eventually boils down to being a Class property on a BeanDefinition instance) is normally mandatory (see Section 3.2.3.2.3, “Instantiation using an instance factory method” and Section 3.6, “Bean definition inheritance” for the two exceptions) and is used for one of two purposes. The class property specifies the class of the bean to be constructed in the much more common case where the container itself directly creates the bean by calling its constructor reflectively (somewhat equivalent to Java code using the 'new' operator). In the less common case where the container invokes a static factory method on a class to create the bean, the class property specifies the actual class containing the static factory method that is to be invoked to create the object (the type of the object returned from the invocation of the static factory method may be the same class or another class entirely; it doesn't matter).

3.2.3.2.1. Instantiation using a constructor

When creating a bean using the constructor approach, all normal classes are usable by and compatible with Spring. That is, the class being created does not need to implement any specific interfaces or be coded in a specific fashion. Just specifying the bean class should be enough. However, depending on what type of IoC you are going to use for that specific bean, you may need a default (empty) constructor.

Additionally, the Spring IoC container isn't limited to just managing true JavaBeans, it is also able to manage virtually any class you want it to manage. Most people using Spring prefer to have actual JavaBeans (having just a default (no-argument) constructor and appropriate setters and getters modeled after the properties) in the container, but it is also possible to have more exotic non-bean-style classes in your container. If, for example, you need to use a legacy connection pool that absolutely does not adhere to the JavaBean specification, Spring can manage it as well.

When using XML-based configuration metadata you can specify your bean class like so:

<bean id="exampleBean" class="examples.ExampleBean"/>

<bean name="anotherExample" class="examples.ExampleBeanTwo"/>

The mechanism for supplying arguments to the constructor (if required), or setting properties of the object instance after it has been constructed, will be described shortly.

3.2.3.2.2. Instantiation using a static factory method

When defining a bean which is to be created using a static factory method, along with the class attribute which specifies the class containing the static factory method, another attribute named factory-method is needed to specify the name of the factory method itself. Spring expects to be able to call this method (with an optional list of arguments as described later) and get back a live object, which from that point on is treated as if it had been created normally via a constructor. One use for such a bean definition is to call static factories in legacy code.

The following example shows a bean definition which specifies that the bean is to be created by calling a factory-method. Note that the definition does not specify the type (class) of the returned object, only the class containing the factory method. In this example, the createInstance() method must be a static method.

<bean id="exampleBean"
      class="examples.ExampleBean2"
      factory-method="createInstance"/>

The mechanism for supplying (optional) arguments to the factory method, or setting properties of the object instance after it has been returned from the factory, will be described shortly.

3.2.3.2.3. Instantiation using an instance factory method

In a fashion similar to instantiation via a static factory method, instantiation using an instance factory method is where the factory method of an existing bean from the container is invoked to create the new bean.

To use this mechanism, the 'class' attribute must be left empty, and the 'factory-bean' attribute must specify the name of a bean in the current (or parent/ancestor) container that contains the factory method. The factory method itself must still be set via the 'factory-method' attribute (as seen in the example below).

<!-- the factory bean, which contains a method called createInstance() -->
<bean id="myFactoryBean" class="...">
  ...
</bean>

<!-- the bean to be created via the factory bean -->
<bean id="exampleBean"
      factory-bean="myFactoryBean"
      factory-method="createInstance"/>

Although the mechanisms for setting bean properties are still to be discussed, one implication of this approach is that the factory bean itself can be managed and configured via DI.

3.2.4. Using the container

A BeanFactory is essentially nothing more than the interface for an advanced factory capable of maintaining a registry of different beans and their dependencies. The BeanFactory enables you to read bean definitions and access them using the bean factory. When using just the BeanFactory you would create one and read in some bean definitions in the XML format as follows:

ClassPathResource res = new ClassPathResource("beans.xml");
BeanFactory factory = new XmlBeanFactory(res);

Basically that's all there is to it. Using getBean(String) you can retrieve instances of your beans; the client-side view of the BeanFactory is surprisingly simple. The BeanFactory interface has only six methods for client code to call:

  • boolean containsBean(String): returns true if the BeanFactory contains a bean definition or bean instance that matches the given name

  • Object getBean(String): returns an instance of the bean registered under the given name. Depending on how the bean was configured by the BeanFactory configuration, either a singleton and thus shared instance or a newly created bean will be returned. A BeansException will be thrown when either the bean could not be found (in which case it'll be a NoSuchBeanDefinitionException), or an exception occurred while instantiating and preparing the bean

  • Object getBean(String, Class): returns a bean, registered under the given name. The bean returned will be cast to the given Class. If the bean could not be cast, corresponding exceptions will be thrown (BeanNotOfRequiredTypeException). Furthermore, all rules of the getBean(String) method apply (see above)

  • Class getType(String name): returns the Class of the bean with the given name. If no bean corresponding to the given name could be found, a NoSuchBeanDefinitionException will be thrown

  • boolean isSingleton(String): determines whether or not the bean definition or bean instance registered under the given name is a singleton (bean scopes such as singleton are explained later). If no bean corresponding to the given name could be found, a NoSuchBeanDefinitionException will be thrown

  • String[] getAliases(String): returns the aliases for the given bean name, if any were defined in the bean definition
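
As a short, purely illustrative usage sketch (assuming the factory created above and a bean definition named 'movieFinder' of type MovieFinder):

// look up a bean by name (a cast is needed with the plain getBean(String) method)
MovieFinder finder = (MovieFinder) factory.getBean("movieFinder");

// the two-argument variant checks the type before returning the bean
MovieFinder checkedFinder = (MovieFinder) factory.getBean("movieFinder", MovieFinder.class);

boolean known = factory.containsBean("movieFinder");
boolean shared = factory.isSingleton("movieFinder");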

3.3. Dependencies

Your typical enterprise application is not made up of a single object (or bean in the Spring parlance). Even the simplest of applications will no doubt have at least a handful of objects that work together to present what the end-user sees as a coherent application. This next section explains how you go from defining a number of stand-alone bean definitions to a fully realized application where objects work (or collaborate) together to achieve some goal (usually an application that does what the end-user wants).

3.3.1. Injecting dependencies

The basic principle behind Dependency Injection (DI) is that objects define their dependencies (that is to say the other objects they work with) only through constructor arguments, arguments to a factory method, or properties which are set on the object instance after it has been constructed or returned from a factory method. Then, it is the job of the container to actually inject those dependencies when it creates the bean. This is fundamentally the inverse, hence the name Inversion of Control (IoC), of the bean itself being in control of instantiating or locating its dependencies on its own using direct construction of classes, or something like the Service Locator pattern.

It becomes evident upon usage that code gets much cleaner when the DI principle is applied, and reaching a higher grade of decoupling is much easier when beans do not look up their dependencies, but are provided with them (and additionally do not even know where the dependencies are located and of what actual class they are).

As touched on in the previous paragraph, DI exists in two major variants, namely Setter Injection, and Constructor Injection.

3.3.1.1. Setter Injection

Setter-based DI is realized by calling setter methods on your beans after invoking a no-argument constructor or no-argument static factory method to instantiate your bean.

Find below an example of a class that can only be dependency injected using pure setter injection. Note that there is nothing special about this class... it is plain old Java.

public class SimpleMovieLister {

    // the SimpleMovieLister has a dependency on the MovieFinder
    private MovieFinder movieFinder;

    // a setter method so that the Spring container can 'inject' a MovieFinder
    public void setMovieFinder(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }
    
    // business logic that actually 'uses' the injected MovieFinder is omitted...
}

3.3.1.2. Constructor Injection

Constructor-based DI is realized by invoking a constructor with a number of arguments, each representing a collaborator. Additionally, calling a static factory method with specific arguments to construct the bean, can be considered almost equivalent, and the rest of this text will consider arguments to a constructor and arguments to a static factory method similarly.

Find below an example of a class that could only be dependency injected using constructor injection. Again, note that there is nothing special about this class.

public class SimpleMovieLister {

    // the SimpleMovieLister has a dependency on the MovieFinder
    private MovieFinder movieFinder;

    // a constructor so that the Spring container can 'inject' a MovieFinder
    public SimpleMovieLister(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }
    
    // business logic that actually 'uses' the injected MovieFinder is omitted...
}

The BeanFactory supports both of these variants for injecting dependencies into beans it manages. (It in fact also supports injecting setter-based dependencies after some dependencies have already been supplied via the constructor approach.) The configuration for the dependencies comes in the form of a BeanDefinition, which is used together with PropertyEditor instances to know how to convert properties from one format to another. However, most users of Spring will not be dealing with these classes directly (that is programmatically), but rather with an XML definition file which will be converted internally into instances of these classes, and used to load an entire Spring IoC container instance.

Bean dependency resolution generally happens as follows:

  1. The BeanFactory is created and initialized with a configuration which describes all the beans. (Most Spring users use a BeanFactory or ApplicationContext implementation that supports XML format configuration files.)

  2. Each bean has dependencies expressed in the form of properties, constructor arguments, or arguments to the static-factory method when that is used instead of a normal constructor. These dependencies will be provided to the bean, when the bean is actually created.

  3. Each property or constructor argument is either an actual definition of the value to set, or a reference to another bean in the container.

  4. Each property or constructor argument which is a value must be able to be converted from whatever format it was specified in, to the actual type of that property or constructor argument. By default Spring can convert a value supplied in string format to all built-in types, such as int, long, String, boolean, etc.

The Spring container validates the configuration of each bean as the container is created, including the validation that properties which are bean references are actually referring to valid beans. However, the bean properties themselves are not set until the bean is actually created. For those beans that are singleton-scoped and set to be pre-instantiated (such as singleton beans in an ApplicationContext), creation happens at the time that the container is created, but otherwise this is only when the bean is requested. When a bean actually has to be created, this will potentially cause a graph of other beans to be created, as its dependencies and its dependencies' dependencies (and so on) are created and assigned.

You can generally trust Spring to do the right thing. It will detect mis-configuration issues, such as references to non-existent beans and circular dependencies, at container load-time. It will actually set properties and resolve dependencies as late as possible, which is when the bean is actually created. This means that a Spring container which has loaded correctly can later generate an exception when you request a bean if there is a problem creating that bean or one of its dependencies. This could happen if the bean throws an exception as a result of a missing or invalid property, for example. This potentially delayed visibility of some configuration issues is why ApplicationContext implementations by default pre-instantiate singleton beans. At the cost of some upfront time and memory to create these beans before they are actually needed, you find out about configuration issues when the ApplicationContext is created, not later. If you wish, you can still override this default behavior and set any of these singleton beans to lazy-initialize (that is not be pre-instantiated).

Finally, if it is not immediately apparent, it is worth mentioning that when one or more collaborating beans are being injected into a dependent bean, each collaborating bean is totally configured prior to being passed (via one of the DI flavors) to the dependent bean. This means that if bean A has a dependency on bean B, the Spring IoC container will totally configure bean B prior to invoking the setter method on bean A; you can read 'totally configure' to mean that the bean will be instantiated (if not a pre-instantiated singleton), all of its dependencies will be set, and the relevant lifecycle methods (such as a configured init method or the InitializingBean callback method) will all be invoked.

3.3.1.3. Some examples

First, an example of using XML-based configuration metadata for setter-based DI. Find below a small part of a Spring XML configuration file specifying some bean definitions.

<bean id="exampleBean" class="examples.ExampleBean">

  <!-- setter injection using the nested <ref/> element -->
  <property name="beanOne"><ref bean="anotherExampleBean"/></property>

  <!-- setter injection using the neater 'ref' attribute -->
  <property name="beanTwo" ref="yetAnotherBean"/>
  <property name="integerProperty" value="1"/>
</bean>

<bean id="anotherExampleBean" class="examples.AnotherBean"/>
<bean id="yetAnotherBean" class="examples.YetAnotherBean"/>
public class ExampleBean {

    private AnotherBean beanOne;
    private YetAnotherBean beanTwo;
    private int i;

    public void setBeanOne(AnotherBean beanOne) {
        this.beanOne = beanOne;
    }

    public void setBeanTwo(YetAnotherBean beanTwo) {
        this.beanTwo = beanTwo;
    }

    public void setIntegerProperty(int i) {
        this.i = i;
    }    
}

As you can see, setters have been declared to match against the properties specified in the XML file.

Now, an example of using constructor-based DI. Find below a snippet from an XML configuration that specifies constructor arguments, and the corresponding Java class.

<bean id="exampleBean" class="examples.ExampleBean">

  <!-- constructor injection using the nested <ref/> element -->
  <constructor-arg><ref bean="anotherExampleBean"/></constructor-arg>
  
  <!-- constructor injection using the neater 'ref' attribute -->
  <constructor-arg ref="yetAnotherBean"/>
  
  <constructor-arg type="int" value="1"/>
</bean>

<bean id="anotherExampleBean" class="examples.AnotherBean"/>
<bean id="yetAnotherBean" class="examples.YetAnotherBean"/>
public class ExampleBean {

    private AnotherBean beanOne;
    private YetAnotherBean beanTwo;
    private int i;
    
    public ExampleBean(
        AnotherBean anotherBean, YetAnotherBean yetAnotherBean, int i) {
        this.beanOne = anotherBean;
        this.beanTwo = yetAnotherBean;
        this.i = i;
    }
}

As you can see, the constructor arguments specified in the bean definition will be passed as arguments to the constructor of the ExampleBean.

Now consider a variant of this where instead of using a constructor, Spring is told to call a static factory method to return an instance of the object:

<bean id="exampleBean" class="examples.ExampleBean"
      factory-method="createInstance">
  <constructor-arg ref="anotherExampleBean"/>
  <constructor-arg ref="yetAnotherBean"/>
  <constructor-arg value="1"/> 
</bean>

<bean id="anotherExampleBean" class="examples.AnotherBean"/>
<bean id="yetAnotherBean" class="examples.YetAnotherBean"/>
public class ExampleBean {

    // a private constructor
    private ExampleBean(...) {
      ...
    }
    
    // a static factory method; the arguments to this method can be
    // considered the dependencies of the bean that is returned,
    // regardless of how those arguments are actually used.
    public static ExampleBean createInstance (
            AnotherBean anotherBean, YetAnotherBean yetAnotherBean, int i) {

        ExampleBean eb = new ExampleBean (...);
        // some other operations...
        return eb;
    }
}

Note that arguments to the static factory method are supplied via constructor-arg elements, exactly the same as if a constructor had actually been used. Also, it is important to realize that the type of the class being returned by the factory method does not have to be of the same type as the class which contains the static factory method, although in this example it is. An instance (non-static) factory method would be used in an essentially identical fashion (aside from the use of the factory-bean attribute instead of the class attribute), so details will not be discussed here.

3.3.2. Constructor Argument Resolution

Constructor argument resolution matching occurs using the argument's type. If there is no potential for ambiguity in the constructor arguments of a bean definition, then the order in which the constructor arguments are defined in a bean definition is the order in which those arguments will be supplied to the appropriate constructor when it is being instantiated. Consider the following class:

package x.y;

public class Foo {

    public Foo(Bar bar, Baz baz) {
        // ...
    }
}

There is no potential for ambiguity here (assuming of course that Bar and Baz classes are not related in an inheritance hierarchy). Thus the following configuration will work just fine, and you do not need to specify the constructor argument indexes and / or types explicitly.

<beans>
    <bean name="foo" class="x.y.Foo">
        <constructor-arg>
            <bean class="x.y.Bar"/>
        </constructor-arg>
        <constructor-arg>
            <bean class="x.y.Baz"/>
        </constructor-arg>
    </bean>
</beans>

When another bean is referenced, the type is known, and matching can occur (as was the case with the preceding example). When a simple type is used, such as <value>true</value>, Spring cannot determine the type of the value, and so cannot match by type without help. Consider the following class:

package examples;

public class ExampleBean {

    // No. of years to calculate the Ultimate Answer
    private int years;
    
    // The Answer to Life, the Universe, and Everything
    private String ultimateAnswer;

    public ExampleBean(int years, String ultimateAnswer) {
        this.years = years;
        this.ultimateAnswer = ultimateAnswer;
    }
}

3.3.2.1. Constructor Argument Type Matching

The above scenario can use type matching with simple types by explicitly specifying the type of the constructor argument using the 'type' attribute. For example:

<bean id="exampleBean" class="examples.ExampleBean">
  <constructor-arg type="int" value="7500000"/>
  <constructor-arg type="java.lang.String" value="42"/>
</bean>

3.3.2.2. Constructor Argument Index

Constructor arguments can have their index specified explicitly by use of the index attribute. For example:

<bean id="exampleBean" class="examples.ExampleBean">
  <constructor-arg index="0" value="7500000"/>
  <constructor-arg index="1" value="42"/>
</bean>

As well as solving the ambiguity problem of multiple simple values, specifying an index also solves the problem of ambiguity where a constructor may have two arguments of the same type. Note that the index is 0 based.

3.3.3. Bean properties and constructor arguments detailed

As mentioned in the previous section, bean properties and constructor arguments can be defined as either references to other managed beans (collaborators), or values defined inline. Spring's XML-based configuration metadata supports a number of sub-element types within its <property/> and <constructor-arg/> elements for just this purpose.

3.3.3.1. Straight values (primitives, Strings, etc.)

The <value/> element specifies a property or constructor argument as a human-readable string representation. As mentioned previously, JavaBeans PropertyEditors are used to convert these string values from a String to the actual type of the property or argument.

<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  
  <!-- results in a setDriverClassName(String) call -->
  <property name="driverClassName">
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property name="url">
    <value>jdbc:mysql://localhost:3306/mydb</value>
  </property>
  <property name="username">
    <value>root</value>
  </property>
  <property name="password">
    <value>masterkaoli</value>
  </property>
</bean>

The <property/> and <constructor-arg/> elements also support the use of the 'value' attribute, which can lead to much more succinct configuration. When using the 'value' attribute, the above bean definition reads like so:

<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  
  <!-- results in a setDriverClassName(String) call -->
  <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
  <property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
  <property name="username" value="root"/>
  <property name="password" value="masterkaoli"/>
</bean>

The Spring team generally prefer the attribute style over the use of nested <value/> elements. If you are reading this reference manual straight through from top to bottom (wow!) then we are getting slightly ahead of ourselves here, but you can also configure a java.util.Properties instance like so:

<bean id="mappings" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
            
   <!-- typed as a java.util.Properties -->
   <property name="properties">
      <value>
         jdbc.driver.className=com.mysql.jdbc.Driver
         jdbc.url=jdbc:mysql://localhost:3306/mydb
      </value>
   </property>
</bean>

Can you see what is happening? The Spring container is converting the text inside the <value/> element into a java.util.Properties instance using the JavaBeans PropertyEditor mechanism. This is a nice shortcut, and is one of a few places where the Spring team do favor the use of the nested <value/> element over the 'value' attribute style.

3.3.3.1.1. The idref element

The idref element is simply an error-proof way to pass the id of another bean in the container (to a <constructor-arg/> or <property/> element).

<bean id="theTargetBean" class="..."/>

<bean id="theClientBean" class="...">
    <property name="targetName">
        <idref bean="theTargetBean" />
    </property>
</bean>

The above bean definition snippet is exactly equivalent (at runtime) to the following snippet:

<bean id="theTargetBean" class="..."/>

<bean id="client" class="...">
    <property name="targetName">
        <value>theTargetBean</value>
    </property>
</bean>

The main reason the first form is preferable to the second is that using the idref tag allows the container to validate at deployment time that the referenced, named bean actually exists. In the second variation, no validation is performed on the value that is passed to the 'targetName' property of the 'client' bean. Any typo will only be discovered (with most likely fatal results) when the 'client' bean is actually instantiated. If the 'client' bean is a prototype bean, this typo (and the resulting exception) may only be discovered long after the container is actually deployed.

Additionally, if the bean being referred to is in the same XML unit, and the bean name is the bean id, the 'local' attribute may be used, which allows the XML parser itself to validate the bean id even earlier, at XML document parse time.

<property name="targetName">
   <!-- a bean with an id of 'theTargetBean' must exist,
                otherwise an XML exception will be thrown -->
   <idref local="theTargetBean"/>
</property>

By way of an example, one common place (at least in pre-Spring 2.0 configuration) where the <idref/> element brings value is in the configuration of AOP interceptors in a ProxyFactoryBean bean definition. If you use <idref/> elements when specifying the interceptor names, there is no chance of inadvertently misspelling an interceptor id.

3.3.3.2. References to other beans (collaborators)

The ref element is the final element allowed inside a <constructor-arg/> or <property/> definition element. It is used to set the value of the specified property to be a reference to another bean managed by the container (a collaborator). As mentioned in a previous section, the referred-to bean is considered to be a dependency of the bean whose property is being set, and will be initialized on demand as needed (if it is a singleton bean it may have already been initialized by the container) before the property is set. All references are ultimately just a reference to another object, but there are 3 variations on how the id/name of the other object may be specified, which determines how scoping and validation is handled.

Specifying the target bean by using the bean attribute of the <ref/> tag is the most general form, and will allow creating a reference to any bean in the same container (whether or not in the same XML file), or parent container. The value of the 'bean' attribute may be the same as either the 'id' attribute of the target bean, or one of the values in the 'name' attribute of the target bean.

<ref bean="someBean"/>

Specifying the target bean by using the local attribute leverages the ability of the XML parser to validate XML id references within the same file. The value of the local attribute must be the same as the id attribute of the target bean. The XML parser will issue an error if no matching element is found in the same file. As such, using the local variant is the best choice (in order to know about errors as early as possible) if the target bean is in the same XML file.

<ref local="someBean"/>

Specifying the target bean by using the 'parent' attribute allows a reference to be created to a bean which is in a parent container of the current container. The value of the 'parent' attribute may be the same as either the 'id' attribute of the target bean, or one of the values in the 'name' attribute of the target bean, and the target bean must be in a parent container to the current one. The main use of this bean reference variant is when you have a hierarchy of containers and you want to wrap an existing bean in a parent container with some sort of proxy which will have the same name as the parent bean.

<!-- in the parent context -->
<bean id="accountService" class="com.foo.SimpleAccountService">
    <!-- insert dependencies as required here -->
</bean>

<!-- in the child (descendant) context -->
<!-- notice that the name of this bean is the same as the name of the 'parent' bean -->
<bean id="accountService"
      class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="target">
        <!-- notice how we refer to the parent bean -->
        <ref parent="accountService"/>
    </property>
    <!-- insert other configuration and dependencies as required here -->
</bean>

3.3.3.3. Inner beans

A <bean/> element inside the <property/> or <constructor-arg/> elements is used to define a so-called inner bean. An inner bean definition does not need to have any id or name defined, and it is best not to even specify any id or name value because the id or name value simply will be ignored by the container.

<bean id="outer" class="...">
  <!-- instead of using a reference to a target bean, simply define the target bean inline -->
  <property name="target">
    <bean class="com.mycompany.Person"> <!-- this is the inner bean -->
      <property name="name" value="Fiona Apple"/>
      <property name="age" value="25"/>
    </bean>
  </property>
</bean>

Note that in the specific case of inner beans, the 'scope' flag and any 'id' or 'name' attribute are effectively ignored. Inner beans are always anonymous and they are always scoped as prototypes. Please also note that it is not possible to inject inner beans into collaborating beans other than the enclosing bean.

3.3.3.4. Collections

The <list/>, <set/>, <map/>, and <props/> elements allow properties and arguments of the Java Collection type List, Set, Map, and Properties, respectively, to be defined and set.

<bean id="moreComplexObject" class="example.ComplexObject">
  <!-- results in a setAdminEmails(java.util.Properties) call -->
  <property name="adminEmails">
    <props>
        <prop key="administrator">administrator@somecompany.org</prop>
        <prop key="support">support@somecompany.org</prop>
        <prop key="development">development@somecompany.org</prop>
    </props>
  </property>
  <!-- results in a setSomeList(java.util.List) call -->
  <property name="someList">
    <list>
        <value>a list element followed by a reference</value>
        <ref bean="myDataSource" />
    </list>
  </property>
  <!-- results in a setSomeMap(java.util.Map) call -->
  <property name="someMap">
    <map>
        <entry>
            <key>
                <value>yup an entry</value>
            </key>
            <value>just some string</value>
        </entry>
        <entry>
            <key>
                <value>yup a ref</value>
            </key>
            <ref bean="myDataSource" />
        </entry>
    </map>
  </property>
  <!-- results in a setSomeSet(java.util.Set) call -->
  <property name="someSet">
    <set>
        <value>just some string</value>
        <ref bean="myDataSource" />
    </set>
  </property>
</bean>

Note that the value of a map key or value, or a set value, can also again be any of the following elements:

bean | ref | idref | list | set | map | props | value | null

3.3.3.4.1. Collection merging

As of Spring 2.0, the container also supports the merging of collections. This allows an application developer to define a parent-style <list/>, <map/>, <set/> or <props/> element, and have child-style <list/>, <map/>, <set/> or <props/> elements inherit and override values from the parent collection; that is to say the child collection's values will be the result obtained from the merging of the elements of the parent and child collections, with the child's collection elements overriding values specified in the parent collection.

Please note that this section on merging makes use of the parent-child bean mechanism. This concept has not yet been introduced, so readers unfamiliar with the concept of parent and child bean definitions may wish to read the relevant section before continuing.

Find below an example of the collection merging feature:

<beans>
<bean id="parent" abstract="true" class="example.ComplexObject">
    <property name="adminEmails">
        <props>
            <prop key="administrator">administrator@somecompany.com</prop>
            <prop key="support">support@somecompany.com</prop>
        </props>
    </property>
</bean>
<bean id="child" parent="parent">
    <property name="adminEmails">
        <!-- the merge is specified on the *child* collection definition -->
        <props merge="true">
            <prop key="sales">sales@somecompany.com</prop>
            <prop key="support">support@somecompany.co.uk</prop>
        </props>
    </property>
</bean>
</beans>

Notice the use of the merge="true" attribute on the <props/> element of the adminEmails property of the child bean definition. When the child bean is actually resolved and instantiated by the container, the resulting instance will have an adminEmails Properties collection that contains the result of merging the child's adminEmails collection with the parent's adminEmails collection:

administrator=administrator@somecompany.com
sales=sales@somecompany.com
support=support@somecompany.co.uk

Notice how the child Properties collection's value set will have inherited all the property elements from the parent <props/>. Notice also how the child's value for the support value overrides the value in the parent collection.

This merging behavior applies similarly to the <list/>, <map/>, and <set/> collection types. In the specific case of the <list/> element, the semantics associated with the List collection type, that is the notion of an ordered collection of values, is maintained; the parent's values will precede all of the child list's values. In the case of the Map, Set, and Properties collection types, there is no notion of ordering and hence no ordering semantics are in effect for the collection types that underlie the associated Map, Set and Properties implementation types used internally by the container.

Finally, some minor notes about the merging support are in order. You cannot merge different collection types (e.g. a Map and a List); if you do attempt to do so, an appropriate Exception will be thrown. In case it is not immediately obvious, the 'merge' attribute must be specified on the lower-level, inherited, child definition; specifying the 'merge' attribute on a parent collection definition is redundant and will not result in the desired merging. Lastly, please note that this merging feature is only available in Spring 2.0 (and later versions).

3.3.3.4.2. Strongly-typed collection (Java5+ only)

If you are one of the lucky few to be using Java5 (Tiger), you will be aware that it is possible to have strongly typed collections. That is, it is possible to declare a Collection type such that it can only contain String elements (for example). If you are using Spring to dependency inject a strongly-typed Collection into a bean, you can take advantage of Spring's type-conversion support such that the elements of your strongly-typed Collection instances will be converted to the appropriate type prior to being added to the Collection.

public class Foo {
                
    private Map<String, Float> accounts;
    
    public void setAccounts(Map<String, Float> accounts) {
        this.accounts = accounts;
    }
}

<beans>
    <bean id="foo" class="x.y.Foo">
        <property name="accounts">
            <map>
                <entry key="one" value="9.99"/>
                <entry key="two" value="2.75"/>
                <entry key="six" value="3.99"/>
            </map>
        </property>
    </bean>
</beans>

When the 'accounts' property of the 'foo' bean is being prepared for injection, the generics information about the element type of the strongly-typed Map<String, Float> is actually available via reflection, and so Spring's type conversion infrastructure will actually recognize the various value elements as being of type Float and so the string values '9.99', '2.75', and '3.99' will be converted into an actual Float type.

3.3.3.5. Nulls

The <null/> element is used to handle null values. Spring treats empty arguments for properties and the like as empty Strings. The following XML-based configuration metadata snippet results in the email property being set to the empty String value ("")

<bean class="ExampleBean">
  <property name="email"><value/></property>
</bean>

This is equivalent to the following Java code: exampleBean.setEmail(""). The special <null/> element may be used to indicate a null value. For example:

<bean class="ExampleBean">
  <property name="email"><null/></property>
</bean>

The above configuration is equivalent to the following Java code: exampleBean.setEmail(null).

3.3.3.6. Shortcuts and other convenience options for XML-based configuration metadata

The configuration metadata shown so far is a tad verbose. That is why there are several options available for you to limit the amount of XML you have to write to configure your components. The first is a shortcut for defining values and references to other beans as part of a <property/> definition. The second is a slightly different format for specifying properties altogether.

3.3.3.6.1. XML-based configuration metadata shortcuts

The <property/>, <constructor-arg/>, and <entry/> elements all support a 'value' attribute which may be used instead of embedding a full <value/> element. Therefore, the following:

<property name="myProperty">
  <value>hello</value>
</property>
<constructor-arg>
  <value>hello</value>
</constructor-arg>
<entry key="myKey">
  <value>hello</value>
</entry>

are equivalent to:

<property name="myProperty" value="hello"/>
<constructor-arg value="hello"/>
<entry key="myKey" value="hello"/>

The <property/> and <constructor-arg/> elements support a similar shortcut 'ref' attribute which may be used instead of a full nested <ref/> element. Therefore, the following:

<property name="myProperty">
  <ref bean="myBean">
</property>
<constructor-arg>
  <ref bean="myBean">
</constructor-arg>

... are equivalent to:

<property name="myProperty" ref="myBean"/>
<constructor-arg ref="myBean"/>

Note however that the shortcut form is equivalent to a <ref bean="xxx"> element; there is no shortcut for <ref local="xxx">. To enforce a strict local reference, you must use the long form.

Finally, the entry element allows a shortcut form to specify the key and/or value of the map, in the form of the 'key' / 'key-ref' and 'value' / 'value-ref' attributes. Therefore, the following:

<entry>
  <key>
    <ref bean="myKeyBean" />
  </key>
  <ref bean="myValueBean" />
</entry>

is equivalent to:

<entry key-ref="myKeyBean" value-ref="myValueBean"/>

Again, the shortcut form is equivalent to a <ref bean="xxx"> element; there is no shortcut for <ref local="xxx">.

3.3.3.6.2. The p-namespace and how to use it to configure properties

The second option you have to limit the amount of XML you have to write to configure your components is to use the special "p-namespace". Spring 2.0 and later features support for extensible configuration formats using namespaces. Those namespaces are all based on an XML Schema definition. In fact, the beans configuration format that you've been reading about is defined in an XML Schema document.

One special namespace is not defined in an XSD file, and only exists in the core of Spring itself. The so-called p-namespace doesn't need a schema definition and is an alternative way of configuring your properties differently than the way you have seen so far. Instead of using nested property elements, using the p-namespace you can use attributes as part of the bean element that describe your property values. The values of the attributes will be taken as the values for your properties.

The following two XML snippets boil down to the same thing in the end: the first is using the format you're familiar with (the property elements) whereas the second example is using the p-namespace

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">
    
    <bean name="classic" class="com.mycompany.ExampleBean">
        <property name="email" value="foo@bar.com/>
    </bean>
    
    <bean name="p-namespace" 
        class="com.mycompany.ExampleBean"
        p:email="foo@bar.com"/>
</beans>

As you can see, we are including an attribute from the p-namespace called email in the bean definition. This is telling Spring that it should include a property declaration. As previously mentioned, the p-namespace doesn't have a schema definition, so the name of the attribute can be set to whatever name your property has.

This next example includes two more bean definitions that both have a reference to another bean:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">
    
    <bean name="john-classic" class="com.mycompany.Person">
        <property name="name" value="John Doe"/>
        <property name="spouse" ref="jane"/>
    </bean>

    <bean name="john-modern" 
        class="com.mycompany.Person"
        p:name="John Doe"
        p:spouse-ref="jane"/>

    <bean name="jane" class="com.mycompany.Person">
        <property name="name" value="Jane Doe"/>
    </bean>
</beans>

As you can see, this example doesn't only include a property value using the p-namespace, but also uses a special format to declare property references. Whereas the first bean definition uses <property name="spouse" ref="jane"/> to create a reference from bean john to bean jane, the second bean definition uses p:spouse-ref="jane" as an attribute to do the exact same thing. In this case spouse is the property name whereas the -ref part tells Spring this is not a value but a bean reference.

[Note]Note

We recommend that you choose carefully which approach you are going to use in your project, and that you communicate this to your team members, so that you do not end up with XML documents using all three approaches at the same time. This prevents people from having trouble understanding the application because of different ways of configuring it, and adds to the consistency of your codebase. Also note that this functionality is only available as of Spring 2.0.

3.3.3.7. Compound property names

Compound or nested property names are perfectly legal when setting bean properties, as long as all components of the path except the final property name are not null. For example, in this bean definition:

<bean id="foo" class="foo.Bar">
  <property name="fred.bob.sammy" value="123" />
</bean>

The foo bean has a fred property which has a bob property, which has a sammy property, and that final sammy property is being set to the value 123. In order for this to work, the fred property of foo, and the bob property of fred must both be non-null after the bean is constructed, or a NullPointerException will be thrown.
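
To make the example concrete, the classes behind such a definition might look like the following sketch; the Fred and Bob types and the eager field initialization are purely illustrative assumptions:

public class Bar {

    private Fred fred = new Fred(); // must be non-null when 'foo' is configured

    public Fred getFred() {
        return this.fred;
    }

    public void setFred(Fred fred) {
        this.fred = fred;
    }
}

public class Fred {

    private Bob bob = new Bob(); // must also be non-null

    public Bob getBob() {
        return this.bob;
    }

    public void setBob(Bob bob) {
        this.bob = bob;
    }
}

public class Bob {

    private int sammy;

    public int getSammy() {
        return this.sammy;
    }

    // receives the value 123 from the bean definition above
    public void setSammy(int sammy) {
        this.sammy = sammy;
    }
}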

3.3.4. Using depends-on

For most situations, the fact that a bean is a dependency of another is expressed by the fact that one bean is set as a property of another. This is typically accomplished with the <ref/> element in XML-based configuration metadata. For the relatively infrequent situations where dependencies between beans are less direct (for example, when a static initializer in a class needs to be triggered, such as database driver registration), the 'depends-on' attribute may be used to explicitly force one or more beans to be initialized before the bean using this element is initialized. Find below an example of using the 'depends-on' attribute to express a dependency on a single bean.

<bean id="beanOne" class="ExampleBean" depends-on="manager"/>

<bean id="manager" class="ManagerBean" />

If you need to express a dependency on multiple beans, you can supply a list of bean names as the value of the 'depends-on' attribute, with commas, whitespace and semi-colons all valid delimiters, like so:

<bean id="beanOne" class="ExampleBean" depends-on="manager,accountDao">
  <property name="manager" ref="manager" />
</bean>

<bean id="manager" class="ManagerBean" />
<bean id="accountDao" class="x.y.jdbc.JdbcAccountDao" />
[Note]Note

The 'depends-on' attribute and property is used not only to specify an initialization time dependency, but also to specify the corresponding destroy time dependency (in the case of singleton beans only). Dependent beans that are defined in the 'depends-on' attribute will be destroyed first, prior to the relevant bean itself being destroyed. This allows you to control shutdown order too.

3.3.5. Lazily-instantiated beans

The default behavior for ApplicationContext implementations is to eagerly pre-instantiate all singleton beans at startup. Pre-instantiation means that an ApplicationContext will eagerly create and configure all of its singleton beans as part of its initialization process. Generally this is a good thing, because it means that any errors in the configuration or in the surrounding environment will be discovered immediately (as opposed to possibly hours or even days down the line).

However, there are times when this behavior is not what is wanted. If you do not want a singleton bean to be pre-instantiated when using an ApplicationContext, you can selectively control this by marking a bean definition as lazy-initialized. A lazy-initialized bean definition tells the IoC container to create the bean instance when it is first requested, rather than at startup.

When configuring beans via XML, this lazy loading is controlled by the 'lazy-init' attribute on the <bean/> element; for example:

<bean id="lazy" class="com.foo.ExpensiveToCreateBean" lazy-init="true"/>

<bean name="not.lazy" class="com.foo.AnotherBean"/>

When the above configuration is consumed by an ApplicationContext, the bean named 'lazy' will not be eagerly pre-instantiated when the ApplicationContext is starting up, whereas the 'not.lazy' bean will be eagerly pre-instantiated.

One thing to understand about lazy-initialization is that even though a bean definition may be marked up as being lazy-initialized, if the lazy-initialized bean is the dependency of a singleton bean that is not lazy-initialized, when the ApplicationContext is eagerly pre-instantiating the singleton, it will have to satisfy all of the singleton's dependencies, one of which will be the lazy-initialized bean! So don't be confused if the IoC container creates one of the beans that you have explicitly configured as lazy-initialized at startup; all that means is that the lazy-initialized bean is being injected into a non-lazy-initialized singleton bean elsewhere.

It is also possible to control lazy-initialization at the container level by using the 'default-lazy-init' attribute on the <beans/> element; for example:

<beans default-lazy-init="true">
    <!-- no beans will be pre-instantiated... -->
</beans>

3.3.6. Autowiring collaborators

The Spring container is able to autowire relationships between collaborating beans. This means that it is possible to automatically let Spring resolve collaborators (other beans) for your bean by inspecting the contents of the BeanFactory. The autowiring functionality has five modes. Autowiring is specified per bean and can thus be enabled for some beans, while other beans will not be autowired. Using autowiring, it is possible to reduce or eliminate the need to specify properties or constructor arguments, thus saving a significant amount of typing. [2] When using XML-based configuration metadata, the autowire mode for a bean definition is specified by using the autowire attribute of the <bean/> element. The following values are allowed:

Table 3.2. Autowiring modes

Mode        Explanation
no

No autowiring at all. Bean references must be defined via a ref element. This is the default, and changing this is discouraged for larger deployments, since explicitly specifying collaborators gives greater control and clarity. To some extent, it is a form of documentation about the structure of a system.

byName

Autowiring by property name. This option will inspect the container and look for a bean named exactly the same as the property which needs to be autowired. For example, if you have a bean definition which is set to autowire by name, and it contains a master property (that is, it has a setMaster(..) method), Spring will look for a bean definition named master, and use it to set the property.

byType

Allows a property to be autowired if there is exactly one bean of the property type in the container. If there is more than one, a fatal exception is thrown, and this indicates that you may not use byType autowiring for that bean. If there are no matching beans, nothing happens; the property is not set. If this is not desirable, setting the dependency-check="objects" attribute value specifies that an error should be thrown in this case.

constructor

This is analogous to byType, but applies to constructor arguments. If there isn't exactly one bean of the constructor argument type in the container, a fatal error is raised.

autodetect

Chooses constructor or byType through introspection of the bean class. If a default constructor is found, the byType mode will be applied.


Note that explicit dependencies in property and constructor-arg settings always override autowiring. Please also note that it is not currently possible to autowire so-called simple properties such as primitives, Strings, and Classes (and arrays of such simple properties). (This is by design and should be considered a feature.) Autowire behavior can be combined with dependency checking, which will be performed after all autowiring has been completed.
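
As a short illustration of byName autowiring (a hedged sketch; the bean ids and classes are assumptions for this example), the 'worker' bean below has a setMaster(..) property, so the container wires it to the bean named 'master' without an explicit ref element:

<!-- the candidate collaborator; its id matches the property name on the bean below -->
<bean id="master" class="com.foo.Master"/>

<!-- autowired by name: the 'master' property (setMaster(..)) is satisfied by the bean named 'master' -->
<bean id="worker" class="com.foo.Worker" autowire="byName"/>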

It is important to understand the various advantages and disadvantages of autowiring. Some advantages of autowiring include:

  • Autowiring can significantly reduce the volume of configuration required. However, mechanisms such as the use of a bean template (discussed elsewhere in this chapter) are also valuable in this regard.

  • Autowiring can cause configuration to keep itself up to date as your objects evolve. For example, if you need to add an additional dependency to a class, that dependency can be satisfied automatically without the need to modify configuration. Thus there may be a strong case for autowiring during development, without ruling out the option of switching to explicit wiring when the code base becomes more stable.

Some disadvantages of autowiring:

  • Autowiring is more magical than explicit wiring. Although, as noted in the table above, Spring is careful to avoid guessing in cases of ambiguity (which might have unexpected results), the relationships between your Spring-managed objects are no longer documented explicitly.

  • Wiring information may not be available to tools that may generate documentation from a Spring container.

  • Autowiring by type will only work when there is a single bean definition of the type specified by the setter method or constructor argument. You need to use explicit wiring if there is any potential ambiguity.

There is no right or wrong answer in all cases; a degree of consistency across a project is best, though. For example, if autowiring is not used in general, it might be confusing to developers to use it just to wire one or two bean definitions.

3.3.6.1. Excluding a bean from being available for autowiring

You can also (on a per-bean basis) totally exclude a bean from being an autowire candidate. When configuring beans using Spring's XML format, the 'autowire-candidate' attribute of the <bean/> element can be set to 'false'; this has the effect of making the container totally exclude that specific bean definition from being available to the autowiring infrastructure.

This can be useful when you have a bean that you never want to be injected into other beans via autowiring. It does not mean that the excluded bean cannot itself be configured using autowiring; it can. Rather, the bean will simply not be considered as a candidate when autowiring other beans.
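
For example (a hedged sketch; the bean id and class are assumptions for illustration), the following bean will never be picked up when other beans are autowired, although it may itself still use autowiring:

<bean id="legacyDataSource" class="com.foo.LegacyDataSource" autowire-candidate="false"/>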

3.3.7. Checking for dependencies

The Spring IoC container also has the ability to check for the existence of unresolved dependencies of a bean deployed into the container. These are JavaBeans properties of the bean that have not had actual values set for them in the bean definition, and have not been provided automatically by the autowiring feature.

This feature is sometimes useful when you want to ensure that all properties (or all properties of a certain type) are set on a bean. Of course, in many cases a bean class will have default values for many properties, or some properties do not apply to all usage scenarios, so this feature is of limited use. Dependency checking can also be enabled and disabled per bean, just as with the autowiring functionality. The default is to not check dependencies. Dependency checking can be handled in several different modes. When using XML-based configuration metadata, this is specified via the 'dependency-check' attribute in a bean definition, which may have the following values.

Table 3.3. Dependency checking modes

Mode        Explanation
none

No dependency checking. Properties of the bean which have no value specified for them are simply not set.

simple

Dependency checking is performed for primitive types and collections (everything except collaborators).

objects

Dependency checking is performed for collaborators only.

all

Dependency checking is done for collaborators, primitive types and collections.


If you are using Java 5 (Tiger) and thus have access to source-level annotations, you may find the section entitled Section 25.3.1, “@Required” to be of interest.
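
Returning to the 'dependency-check' attribute itself, a hedged example (the bean ids and classes are assumptions for illustration) of a definition that requires every collaborator property to be set might look like this:

<!-- an exception will be raised at configuration time if any collaborator
     property of ExampleBean is left unset -->
<bean id="exampleBean" class="com.foo.ExampleBean" dependency-check="objects">
    <property name="accountDao" ref="accountDao"/>
</bean>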

3.3.8. Method Injection

For most application scenarios, the majority of the beans in the container will be singletons. When a singleton bean needs to collaborate with another singleton bean, or a non-singleton bean needs to collaborate with another non-singleton bean, the typical and common approach of handling this dependency by defining one bean to be a property of the other is quite adequate. There is a problem when the bean lifecycles are different. Consider a singleton bean A which needs to use a non-singleton (prototype) bean B, perhaps on each method invocation on A. The container will only create the singleton bean A once, and thus only get the opportunity to set the properties once. There is no opportunity for the container to provide bean A with a new instance of bean B every time one is needed.

One solution to this issue is to forgo some inversion of control. Bean A can be made aware of the container by implementing the BeanFactoryAware interface, and use programmatic means to ask the container via a getBean("B") call for (a typically new) bean B instance every time it needs it. Find below an admittedly somewhat contrived example of this approach:

// a class that uses a stateful Command-style class to perform some processing
package fiona.apple;

import java.util.Map;

// lots of Spring-API imports
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;

public class CommandManager implements BeanFactoryAware {

   private BeanFactory beanFactory;

   public Object process(Map commandState) {
      // grab a new instance of the appropriate Command
      Command command = createCommand();
      // set the state on the (hopefully brand new) Command instance
      command.setState(commandState);
      return command.execute();
   }

   // the Command returned here could be an implementation that executes asynchronously, or whatever
   protected Command createCommand() {
      return (Command) this.beanFactory.getBean("command"); // notice the Spring API dependency
   }

   public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
      this.beanFactory = beanFactory;
   }
}

The above example is generally not a desirable solution, since the business code is then aware of and coupled to the Spring Framework. Method Injection, a somewhat advanced feature of the Spring IoC container, allows this use case to be handled in a clean fashion.

3.3.8.1. Lookup method injection

Lookup method injection refers to the ability of the container to override methods on container managed beans, to return the result of looking up another named bean in the container. The lookup will typically be of a prototype bean as in the scenario described above. The Spring Framework implements this method injection by dynamically generating a subclass overriding the method, using bytecode generation via the CGLIB library.

So if you look at the code from the previous snippet (the CommandManager class), the Spring container is going to dynamically override the implementation of the createCommand() method. Your CommandManager class will not have any Spring dependencies, as can be seen in the reworked example below:

package fiona.apple;

// no more Spring imports! 

public abstract class CommandManager {

   public Object process(Object commandState) {
      // grab a new instance of the appropriate Command interface
      Command command = createCommand();
      // set the state on the (hopefully brand new) Command instance
      command.setState(commandState);
      return command.execute();
   }

    // okay... but where is the implementation of this method?
   protected abstract Command createCommand();
}

In the client class containing the method to be injected (the CommandManager in this case), the method that is to be 'injected' must have a signature of the following form:

<public|protected> [abstract] <return-type> theMethodName(no-arguments);

If the method is abstract, the dynamically-generated subclass will implement the method. Otherwise, the dynamically-generated subclass will override the concrete method defined in the original class. Let's look at an example:

<!-- a stateful bean deployed as a prototype (non-singleton) -->
<bean id="command" class="fiona.apple.AsyncCommand" scope="prototype">
  <!-- inject dependencies here as required -->
</bean>

<!-- commandManager uses the prototype-scoped 'command' bean -->
<bean id="commandManager" class="fiona.apple.CommandManager">
  <lookup-method name="createCommand" bean="command"/>
</bean>

The bean identified as commandManager will call its own method createCommand() whenever it needs a new instance of the command bean. It is important to note that the person deploying the beans must be careful to deploy the command bean as a prototype (if that is actually what is needed). If it is deployed as a singleton, the same instance of the command bean will be returned each time!

Please be aware that in order for this dynamic subclassing to work, you will need to have the CGLIB jar(s) on your classpath. Additionally, the class that the Spring container is going to subclass cannot be final, and the method that is being overridden cannot be final either. Also, testing a class that has an abstract method can be somewhat odd in that you will have to subclass the class yourself and supply a stub implementation of the abstract method. Finally, objects that have been the target of method injection cannot be serialized.

[Tip]Tip

The interested reader may also find the ServiceLocatorFactoryBean (in the org.springframework.beans.factory.config package) to be of use; the approach is similar to that of the ObjectFactoryCreatingFactoryBean, but it allows you to specify your own lookup interface as opposed to having to use a Spring-specific lookup interface such as the ObjectFactory. Consult the (copious) Javadocs for the ServiceLocatorFactoryBean for a full treatment of this alternative approach (that does reduce the coupling to Spring).
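
As a rough illustration of the ServiceLocatorFactoryBean approach (a hedged sketch; the CommandFactory interface is an assumed application-defined lookup interface with a single method such as 'Command createCommand()', and is not part of Spring), the configuration might look something like this:

<bean id="commandFactory"
      class="org.springframework.beans.factory.config.ServiceLocatorFactoryBean">
    <!-- the application-defined lookup interface that client beans will be injected with -->
    <property name="serviceLocatorInterface" value="fiona.apple.CommandFactory"/>
</bean>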

3.3.8.2. Arbitrary method replacement

A less commonly useful form of method injection than Lookup Method Injection is the ability to replace arbitrary methods in a managed bean with another method implementation. Users may safely skip the rest of this section (which describes this somewhat advanced feature), until this functionality is actually needed.

When using XML-based configuration metadata, the replaced-method element may be used to replace an existing method implementation with another, for a deployed bean. Consider the following class, with a method computeValue, which we want to override:

public class MyValueCalculator {

  public String computeValue(String input) {
    // some real code...
  }

  // some other methods...

}

A class implementing the org.springframework.beans.factory.support.MethodReplacer interface provides the new method definition.

import java.lang.reflect.Method;

import org.springframework.beans.factory.support.MethodReplacer;

/** meant to be used to override the existing computeValue(String)
    implementation in MyValueCalculator
  */
public class ReplacementComputeValue implements MethodReplacer {

    public Object reimplement(Object o, Method m, Object[] args) throws Throwable {
        // get the input value, work with it, and return a computed result
        String input = (String) args[0];
        ... 
        return ...;
    }
}

The bean definition to deploy the original class and specify the method override would look like this:

<bean id="myValueCalculator class="x.y.z.MyValueCalculator">
  <!-- arbitrary method replacement -->
  <replaced-method name="computeValue" replacer="replacementComputeValue">
    <arg-type>String</arg-type>
  </replaced-method>
</bean>

<bean id="replacementComputeValue" class="a.b.c.ReplacementComputeValue"/>

One or more contained <arg-type/> elements within the <replaced-method/> element may be used to indicate the method signature of the method being overridden. Note that the signature for the arguments is actually only needed in the case that the method is actually overloaded and there are multiple variants within the class. For convenience, the type string for an argument may be a substring of the fully qualified type name. For example, all the following would match java.lang.String.

    java.lang.String
    String
    Str

Since the number of arguments is often enough to distinguish between each possible choice, this shortcut can save a lot of typing, by allowing you to type just the shortest string that will match an argument type.

3.4. Bean scopes

When you create a bean definition what you are actually creating is a recipe for creating actual instances of the class defined by that bean definition. The idea that a bean definition is a recipe is important, because it means that, just like a class, you can potentially have many object instances created from a single recipe.

You can control not only the various dependencies and configuration values that are to be plugged into an object that is created from a particular bean definition, but also the scope of the objects created from a particular bean definition. This approach is very powerful and gives you the flexibility to choose the scope of the objects you create through configuration instead of having to 'bake in' the scope of an object at the Java class level. Beans can be defined to be deployed in one of a number of scopes: out of the box, the Spring Framework supports exactly five scopes (of which three are available only if you are using a web-aware ApplicationContext).

The scopes supported out of the box are listed below:

Table 3.4. Bean scopes

Scope        Description

singleton

Scopes a single bean definition to a single object instance per Spring IoC container.

prototype

Scopes a single bean definition to any number of object instances.

request

Scopes a single bean definition to the lifecycle of a single HTTP request; that is, each and every HTTP request will have its own instance of a bean created off the back of a single bean definition. Only valid in the context of a web-aware Spring ApplicationContext.

session

Scopes a single bean definition to the lifecycle of a HTTP Session. Only valid in the context of a web-aware Spring ApplicationContext.

global session

Scopes a single bean definition to the lifecycle of a global HTTP Session. Typically only valid when used in a portlet context. Only valid in the context of a web-aware Spring ApplicationContext.


3.4.1. The singleton scope

When a bean is a singleton, only one shared instance of the bean will be managed, and all requests for beans with an id or ids matching that bean definition will result in that one specific bean instance being returned by the Spring container.

To put it another way, when you define a bean definition and it is scoped as a singleton, then the Spring IoC container will create exactly one instance of the object defined by that bean definition. This single instance will be stored in a cache of such singleton beans, and all subsequent requests and references for that named bean will result in the cached object being returned.

Please be aware that Spring's concept of a singleton bean is quite different from the Singleton pattern as defined in the seminal Gang of Four (GoF) patterns book. The GoF Singleton hardcodes the scope of an object such that one and only one instance of a particular class will ever be created per ClassLoader. The scope of the Spring singleton is best described as per container and per bean. This means that if you define one bean for a particular class in a single Spring container, then the Spring container will create one and only one instance of the class defined by that bean definition. The singleton scope is the default scope in Spring. To define a bean as a singleton in XML, you would write configuration like so:

<bean id="accountService" class="com.foo.DefaultAccountService"/>

<!-- the following is equivalent, though redundant (singleton scope is the default); using spring-beans-2.0.dtd -->
<bean id="accountService" class="com.foo.DefaultAccountService" scope="singleton"/>

<!-- the following is equivalent and preserved for backward compatibility in spring-beans.dtd -->
<bean id="accountService" class="com.foo.DefaultAccountService" singleton="true"/>

3.4.2. The prototype scope

The non-singleton, prototype scope of bean deployment results in the creation of a new bean instance every time a request for that specific bean is made (that is, it is injected into another bean or it is requested via a programmatic getBean() method call on the container). As a rule of thumb, you should use the prototype scope for all beans that are stateful, while the singleton scope should be used for stateless beans.

The following diagram illustrates the Spring prototype scope. Please note that a DAO would not typically be configured as a prototype, since a typical DAO would not hold any conversational state; it was just easier for this author to reuse the core of the singleton diagram.

To define a bean as a prototype in XML, you would write configuration like so:

<!-- using spring-beans-2.0.dtd -->
<bean id="accountService" class="com.foo.DefaultAccountService" scope="prototype"/>

<!-- the following is equivalent and preserved for backward compatibility in spring-beans.dtd -->
<bean id="accountService" class="com.foo.DefaultAccountService" singleton="false"/>

There is one quite important thing to be aware of when deploying a bean in the prototype scope, in that the lifecycle of the bean changes slightly. Spring cannot (and hence does not) manage the complete lifecycle of a prototype bean: the container instantiates, configures, decorates and otherwise assembles a prototype object, hands it to the client and then has no further knowledge of that prototype instance. This means that while initialization lifecycle callback methods will be called on all objects regardless of scope, in the case of prototypes, any configured destruction lifecycle callbacks will not be called. It is the responsibility of the client code to clean up prototype-scoped objects and release any expensive resources that the prototype bean(s) are holding onto. (One possible way to get the Spring container to release resources used by prototype-scoped beans is through the use of a custom bean post-processor which would hold a reference to the beans that need to be cleaned up.)

In some respects, you can think of the Spring container's role when talking about a prototype-scoped bean as somewhat of a replacement for the Java 'new' operator. Any lifecycle aspects past that point have to be handled by the client. The lifecycle of a bean in a Spring IoC container is further described in the section entitled Section 3.5.1, “Lifecycle interfaces”.

[Note]Backwards compatibility note: specifying the lifecycle scope in XML

If you are referencing the 'spring-beans.dtd' DTD in a bean definition file and you are being explicit about the lifecycle scope of your beans, you must use the "singleton" attribute to express the lifecycle scope (remembering that the singleton lifecycle scope is the default). If you are referencing the 'spring-beans-2.0.dtd' DTD or the Spring 2.0 XSD schema, then you will need to use the "scope" attribute (because the "singleton" attribute was removed from the definition of the new DTD and XSD files in favour of the "scope" attribute).

To be totally clear about this, this means that if you use the "singleton" attribute in an XML bean definition then you must be referencing the 'spring-beans.dtd' DTD in that file. If you are using the "scope" attribute then you must be referencing either the 'spring-beans-2.0.dtd' DTD or the 'spring-beans-2.0.xsd' XSD in that file.

3.4.3. The other scopes

The other scopes, namely request, session, and global session are for use only in web-based applications (and can be used irrespective of which particular web application framework you are using, if indeed any). In the interest of keeping related concepts together in one place in the reference documentation, these scopes are described here.

[Note]Note

The scopes that are described in the following paragraphs are only available if you are using a web-aware Spring ApplicationContext implementation (such as XmlWebApplicationContext). If you try using these next scopes with regular Spring IoC containers such as the XmlBeanFactory or ClassPathXmlApplicationContext, you will get an IllegalStateException complaining about an unknown bean scope.

3.4.3.1. Initial web configuration

In order to effect the scoping of beans at the request, session, and global session levels (web-scoped beans), some minor initial configuration is required before you can set about defining your bean definitions. Please note that this extra setup is not required if you just want to use the 'standard' scopes (namely singleton and prototype).

Now as things stand, there are a couple of ways to effect this initial setup depending on your particular servlet environment. If you are using a Servlet 2.4+ web container, then you need only add the following listener declaration to your web application's 'web.xml' file.

<web-app>
  ...
  <listener>
    <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class>
  </listener>
  ...
</web-app>

If you are using an older web container (before Servlet 2.4), you will need to use a (provided) javax.servlet.Filter implementation. Find below a snippet of XML configuration that has to be included in the 'web.xml' file of your web application if you want to have access to web-scoped beans (the filter settings depend on the surrounding web application configuration and so you will have to change them as appropriate).

<web-app>
  ..
  <filter> 
    <filter-name>requestContextFilter</filter-name> 
    <filter-class>org.springframework.web.filter.RequestContextFilter</filter-class>
  </filter> 
  <filter-mapping> 
    <filter-name>requestContextFilter</filter-name> 
    <url-pattern>/*</url-pattern>
  </filter-mapping>
  ...
</web-app>

That's it. The RequestContextListener and RequestContextFilter classes both do exactly the same thing, namely bind the HTTP request object to the Thread that is servicing that request. This makes beans that are request- and session-scoped available further down the call chain.

3.4.3.2. The request scope

Consider the following bean definition:

<bean id="loginAction" class="com.foo.LoginAction" scope="request"/>

With the above bean definition in place, the Spring container will create a brand new instance of the LoginAction bean using the 'loginAction' bean definition for each and every HTTP request. That is, the 'loginAction' bean will be effectively scoped at the HTTP request level. You can change or dirty the internal state of the instance that is created as much as you want, safe in the knowledge that other requests that are also using instances created off the back of the same 'loginAction' bean definition will not be seeing these changes in state since they are particular to an individual request. When the request is finished processing, the bean that is scoped to the request will be discarded.

3.4.3.3. The session scope

Consider the following bean definition:

<bean id="userPreferences" class="com.foo.UserPreferences" scope="session"/>

With the above bean definition in place, the Spring container will create a brand new instance of the UserPreferences bean using the 'userPreferences' bean definition for the lifetime of a single HTTP Session. In other words, the 'userPreferences' bean will be effectively scoped at the HTTP Session level. Just like request-scoped beans, you can change the internal state of the instance that is created as much as you want, safe in the knowledge that other HTTP Session instances that are also using instances created off the back of the same 'userPreferences' bean definition will not be seeing these changes in state since they are particular to an individual HTTP Session. When the HTTP Session is eventually discarded, the bean that is scoped to that particular HTTP Session will also be discarded.

3.4.3.4. The global session scope

Consider the following bean definition:

<bean id="userPreferences" class="com.foo.UserPreferences" scope="globalSession"/>

The global session scope is similar to the standard HTTP Session scope (described immediately above), and really only makes sense in the context of portlet-based web applications. The portlet specification defines the notion of a global Session that is shared amongst all of the various portlets that make up a single portlet web application. Beans defined at the global session scope are scoped (or bound) to the lifetime of the global portlet Session.

Please note that if you are writing a standard Servlet-based web application and you define one or more beans as having global session scope, the standard HTTP Session scope will be used, and no error will be raised.

3.4.3.5. Scoped beans as dependencies

Being able to define a bean scoped to a HTTP request or Session (or indeed a custom scope of your own devising) is all very well, but one of the main value-adds of the Spring IoC container is that it manages not only the instantiation of your objects (beans), but also the wiring up of collaborators (or dependencies). If you want to inject a (for example) HTTP request scoped bean into another bean, you will need to inject an AOP proxy in place of the scoped bean. That is, you need to inject a proxy object that exposes the same public interface as the scoped object, but that is smart enough to be able to retrieve the real, target object from the relevant scope (for example a HTTP request) and delegate method calls onto the real object.

[Note]Note

You do not need to use the <aop:scoped-proxy/> in conjunction with beans that are scoped as singletons or prototypes. It is an error to try to create a scoped proxy for a singleton bean (and the resulting BeanCreationException will certainly set you straight in this regard).

Let's look at the configuration that is required to effect this; the configuration is not hugely complex (it takes just one line), but it is important to understand the “why” as well as the “how” behind it.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

    <!-- a HTTP Session-scoped bean exposed as a proxy -->
    <bean id="userPreferences" class="com.foo.UserPreferences" scope="session">
          
          <!-- this next element effects the proxying of the surrounding bean -->
          <aop:scoped-proxy/>
    </bean>
    
    <!-- a singleton-scoped bean injected with a proxy to the above bean -->
    <bean id="userService" class="com.foo.SimpleUserService">
    
        <!-- a reference to the proxied 'userPreferences' bean -->
        <property name="userPreferences" ref="userPreferences"/>

    </bean>
</beans>

To create such a proxy, you need only to insert a child <aop:scoped-proxy/> element into a scoped bean definition (you may also need the CGLIB library on your classpath so that the container can effect class-based proxying; you will also need to be using Appendix A, XML Schema-based configuration). So, just why do you need this <aop:scoped-proxy/> element in the definition of beans scoped at the request, session, globalSession and 'insert your custom scope here' level? The reason is best explained by picking apart the following bean definition (please note that the following 'userPreferences' bean definition as it stands is incomplete):

<bean id="userPreferences" class="com.foo.UserPreferences" scope="session"/>

<bean id="userManager" class="com.foo.UserManager">
    <property name="userPreferences" ref="userPreferences"/>
</bean>

From the above configuration it is evident that the singleton bean 'userManager' is being injected with a reference to the HTTP Session-scoped bean 'userPreferences'. The salient point here is that the 'userManager' bean is a singleton... it will be instantiated exactly once per container, and its dependencies (in this case only one, the 'userPreferences' bean) will also only be injected (once!). This means that the 'userManager' will (conceptually) only ever operate on the exact same 'userPreferences' object, that is the one that it was originally injected with. This is not what you want when you inject a HTTP Session-scoped bean as a dependency into a collaborating object (typically). Rather, what we do want is a single 'userManager' object, and then, for the lifetime of a HTTP Session, we want to see and use a 'userPreferences' object that is specific to said HTTP Session.

Rather, what you need is to inject some sort of object that exposes the exact same public interface as the UserPreferences class (ideally an object that is a UserPreferences instance) and that is smart enough to be able to go off and fetch the real UserPreferences object from whatever underlying scoping mechanism has been chosen (HTTP request, Session, etc.). You can then safely inject this proxy object into the 'userManager' bean, which will be blissfully unaware that the UserPreferences reference that it is holding onto is a proxy. In the case of this example, when a UserManager instance invokes a method on the dependency-injected UserPreferences object, it is really invoking a method on the proxy... the proxy will then go off and fetch the real UserPreferences object from (in this case) the HTTP Session, and delegate the method invocation onto the retrieved real UserPreferences object.

That is why you need the following, correct and complete, configuration when injecting request-, session-, and globalSession-scoped beans into collaborating objects:

<bean id="userPreferences" class="com.foo.UserPreferences" scope="session">
    <aop:scoped-proxy/>
</bean>

<bean id="userManager" class="com.foo.UserManager">
    <property name="userPreferences" ref="userPreferences"/>
</bean>
3.4.3.5.1. Choosing the type of proxy created

By default, when the Spring container is creating a proxy for a bean that is marked up with the <aop:scoped-proxy/> element, a CGLIB-based class proxy will be created. This means that you will need to have the CGLIB library on the classpath for your application.

You can choose to have the Spring container create 'standard' JDK interface-based proxies for such scoped beans by specifying 'false' for the value of the 'proxy-target-class' attribute of the <aop:scoped-proxy/> element. Using JDK interface-based proxies does mean that you don't need any additional libraries on your application's classpath to effect such proxying, but it does mean that the class of the scoped bean must implement at least one interface, and all of the collaborators into which the scoped bean is injected must be referencing the bean via one of its interfaces.

<!-- DefaultUserPreferences implements the UserPreferences interface -->
<bean id="userPreferences" class="com.foo.DefaultUserPreferences" scope="session">
    <aop:scoped-proxy proxy-target-class="false"/>
</bean>

<bean id="userManager" class="com.foo.UserManager">
    <property name="userPreferences" ref="userPreferences"/>
</bean>

The section entitled Section 6.6, “Proxying mechanisms” may also be of some interest with regard to understanding the nuances of choosing whether class-based or interface-based proxying is right for you.

3.4.4. Custom scopes

As of Spring 2.0, the bean scoping mechanism in Spring is extensible. This means that you are not limited to just the bean scopes that Spring provides out of the box; you can define your own scopes, or even redefine the existing scopes (although that last one would probably be considered bad practice - please note that you cannot override the built-in singleton and prototype scopes).

3.4.4.1. Creating your own custom scope

Scopes are defined by the org.springframework.beans.factory.config.Scope interface. This is the interface that you will need to implement in order to integrate your own custom scope(s) into the Spring container, and it is described in detail below. You may wish to look at the Scope implementations that are supplied with the Spring Framework itself for an idea of how to go about implementing your own. The Scope Javadoc also explains in more detail what you need to implement for your own scope.

The Scope interface has four methods dealing with getting objects from the scope, removing them from the scope and allowing them to be 'destroyed' if needed.

The first method should return the object from the underlying scope. The session scope implementation for example will return the session-scoped bean (and if it does not exist, return a new instance of the bean, after having bound it to the session for future reference).

Object get(String name, ObjectFactory objectFactory)

The second method should remove the object from the underlying scope. The session scope implementation, for example, removes the session-scoped bean from the underlying session. The object should be returned (you are allowed to return null if the object with the specified name wasn't found).

Object remove(String name)

The third method is used to register callbacks the scope should execute when it is destroyed or when the specified object in the scope is destroyed. Please refer to the JavaDoc or a Spring scope implementation for more information on destruction callbacks.

void registerDestructionCallback(String name, Runnable destructionCallback)

The last method deals with obtaining the conversation identifier for the underlying scope. This identifier is different for each scope. For a session for example, this can be the session identifier.

String getConversationId()
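
Pulling these four methods together, the following is a minimal sketch of a custom scope, assuming a hypothetical ThreadScope that stores its beans in a ThreadLocal map (this class does not ship with the Spring Framework, and destruction callbacks are simply ignored here for brevity):

package com.foo;

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

public class ThreadScope implements Scope {

    // each thread gets its own map of scoped bean instances
    private final ThreadLocal threadScope = new ThreadLocal() {
        protected Object initialValue() {
            return new HashMap();
        }
    };

    public Object get(String name, ObjectFactory objectFactory) {
        Map scope = (Map) this.threadScope.get();
        Object object = scope.get(name);
        if (object == null) {
            // not yet created for this thread, so create it and cache it
            object = objectFactory.getObject();
            scope.put(name, object);
        }
        return object;
    }

    public Object remove(String name) {
        Map scope = (Map) this.threadScope.get();
        return scope.remove(name);
    }

    public void registerDestructionCallback(String name, Runnable destructionCallback) {
        // destruction callbacks are not supported in this simple sketch
    }

    public String getConversationId() {
        // use the thread name as the conversation identifier
        return Thread.currentThread().getName();
    }
}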

3.4.4.2. Using a custom scope

After you have written and tested one or more custom Scope implementations, you then need to make the Spring container aware of your new scope(s). The central method to register a new Scope with the Spring container is declared on the ConfigurableBeanFactory interface (implemented by most of the concrete BeanFactory implementations that ship with Spring); this central method is displayed below:

void registerScope(String scopeName, Scope scope);

The first argument to the registerScope(..) method is the unique name associated with a scope; examples of such names in the Spring container itself are 'singleton' and 'prototype'. The second argument to the registerScope(..) method is an actual instance of the custom Scope implementation that you wish to register and use.

Let's assume that you have written your own custom Scope implementation, and you have registered it like so:

// note: the ThreadScope class does not ship with the Spring Framework
Scope customScope = new ThreadScope();
beanFactory.registerScope("thread", customScope);

You can then create bean definitions that adhere to the scoping rules of your custom Scope like so:

<bean id="..." class="..." scope="thread"/>

If you have your own custom Scope implementation(s), you are not just limited to only programmatic registration of the custom scope(s). You can also do the Scope registration declaratively, using the CustomScopeConfigurer class.

The declarative registration of custom Scope implementations using the CustomScopeConfigurer class is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

    <bean class="org.springframework.beans.factory.config.CustomScopeConfigurer">
        <property name="scopes">
            <map>
                <entry key="thread">
                    <bean class="com.foo.ThreadScope"/>
                </entry>
            </map>
        </property>
    </bean>

    <bean id="bar" class="x.y.Bar" scope="thread">
        <property name="name" value="Rick"/>
        <aop:scoped-proxy/>
    </bean>

    <bean id="foo" class="x.y.Foo">
        <property name="bar" ref="bar"/>
    </bean>

</beans>

3.5. Customizing the nature of a bean

3.5.1. Lifecycle interfaces

The Spring Framework provides several marker interfaces to change the behavior of your bean in the container; they include InitializingBean and DisposableBean. Implementing these interfaces will result in the container calling afterPropertiesSet() for the former and destroy() for the latter to allow the bean to perform certain actions upon initialization and destruction.

Internally, the Spring Framework uses BeanPostProcessor implementations to process any marker interfaces it can find and call the appropriate methods. If you need custom features or other lifecycle behavior Spring doesn't offer out-of-the-box, you can implement a BeanPostProcessor yourself. More information about this can be found in the section entitled Section 3.7, “Container extension points”.

All the different lifecycle marker interfaces are described below. In one of the appendices, you can find diagrams that show how Spring manages beans, and how those lifecycle features change the nature of your beans and how they are managed.

3.5.1.1. Initialization callbacks

Implementing the org.springframework.beans.factory.InitializingBean interface allows a bean to perform initialization work after all necessary properties on the bean are set by the container. The InitializingBean interface specifies exactly one method:

void afterPropertiesSet() throws Exception;

Generally, the use of the InitializingBean interface can be avoided (and is discouraged since it unnecessarily couples the code to Spring). A bean definition provides support for a generic initialization method to be specified. In the case of XML-based configuration metadata, this is done using the 'init-method' attribute. For example, the following definition:

<bean id="exampleInitBean" class="examples.ExampleBean" init-method="init"/>
public class ExampleBean {
    
    public void init() {
        // do some initialization work
    }
}

Is exactly the same as...

<bean id="exampleInitBean" class="examples.AnotherExampleBean"/>
public class AnotherExampleBean implements InitializingBean {
    
    public void afterPropertiesSet() {
        // do some initialization work
    }
}

... but does not couple the code to Spring.

3.5.1.2. Destruction callbacks

Implementing the org.springframework.beans.factory.DisposableBean interface allows a bean to get a callback when the container containing it is destroyed. The DisposableBean interface specifies one method:

void destroy() throws Exception;

Generally, the use of the DisposableBean marker interface can be avoided (and is discouraged since it unnecessarily couples the code to Spring). A bean definition provides support for a generic destroy method to be specified. When using XML-based configuration metadata this is done via the 'destroy-method' attribute on the <bean/>. For example, the following definition:

<bean id="exampleInitBean" class="examples.ExampleBean" destroy-method="cleanup"/>
public class ExampleBean {

    public void cleanup() {
        // do some destruction work (like releasing pooled connections)
    }
}

Is exactly the same as...

<bean id="exampleInitBean" class="examples.AnotherExampleBean"/>
public class AnotherExampleBean implements DisposableBean {

    public void destroy() {
        // do some destruction work (like releasing pooled connections)
    }
}

... but does not couple the code to Spring.

3.5.1.2.1. Default initialization & destroy methods

When you are writing initialization and destroy method callbacks that do not use the Spring-specific InitializingBean and DisposableBean callback interfaces, one (in the experience of this author) typically finds oneself writing methods with names such as init(), initialize(), dispose(), etc. The names of such lifecycle callback methods are (hopefully!) standardized across a project so that developers on a team all use the same method names and thus ensure some level of consistency.

The Spring container can now be configured to 'look' for named initialization and destroy callback method names on every bean. This means that you as an application developer can simply write your application classes, use a convention of having an initialization callback called init(), and then (without having to configure each and every bean with, in the case of XML-based configuration, an 'init-method="init"' attribute) be safe in the knowledge that the Spring IoC container will call that method when the bean is being created (and in accordance with the standard lifecycle callback contract described previously).

Let's look at an example to make the use of this feature completely clear. For the sake of the example, let us say that one of the coding conventions on a project is that all initialization callback methods are to be named init() and that destroy callback methods are to be called destroy(). This leads to classes like so...

public class DefaultBlogService implements BlogService {

    private BlogDao blogDao;

    public void setBlogDao(BlogDao blogDao) {
        this.blogDao = blogDao;
    }

    // this is (unsurprisingly) the initialization callback method
    public void init() {
        if (this.blogDao == null) {
            throw new IllegalStateException("The [blogDao] property must be set.");
        }
    }
}
<beans default-init-method="init">

    <bean id="blogService" class="com.foo.DefaultBlogService">
        <property name="blogDao" ref="blogDao" />
    </bean>

</beans>

Notice the use of the 'default-init-method' attribute on the top-level <beans/> element. The presence of this attribute means that the Spring IoC container will recognize a method called 'init' on beans as being the initialization method callback, and when a bean is being created and assembled, if the bean's class has such a method, it will be invoked at the appropriate time.

Destroy method callbacks are configured similarly (in XML that is) using the 'default-destroy-method' attribute on the top-level <beans/> element.

The use of this feature can save you the (small) housekeeping chore of specifying an initialization and destroy method callback on each and every bean, and it is great for enforcing a consistent naming convention for initialization and destroy method callbacks (and consistency is something that should always be aimed for).

Consider the case where you have some existing beans where the underlying classes already have initialization callback methods that are named at variance with the convention. You can always override the default by specifying (in XML that is) the method name using the 'init-method' and 'destroy-method' attributes on the <bean/> element itself.
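
For illustration (a hedged sketch; the bean id and class are assumptions for this example), an individual bean can override the container-wide default like so:

<beans default-init-method="init">

    <!-- this class uses a legacy method name, so the default is overridden here -->
    <bean id="legacyService" class="com.foo.LegacyService" init-method="initialize"/>

</beans>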

Finally, please be aware that the Spring container guarantees that a configured initialization callback is called immediately after a bean has been supplied with all of its dependencies. This means that the initialization callback will be called on the raw bean reference, which means that any AOP interceptors or suchlike that will ultimately be applied to the bean will not yet be in place. A target bean is fully created first, then an AOP proxy (for example) with its interceptor chain is applied. Note that, if the target bean and the proxy are defined separately, your code can even interact with the raw target bean, bypassing the proxy. Hence, it would be very inconsistent to apply the interceptors to the init method, since that would couple the lifecycle of the target bean with its proxy/interceptors, and leave strange semantics when talking to the raw target bean directly.

3.5.1.2.2. Shutting down the Spring IoC container gracefully in non-web applications
[Note]Note

This next section does not apply to web applications (in case the title of this section did not make that abundantly clear). Spring's web-based ApplicationContext implementations already have code in place to handle shutting down the Spring IoC container gracefully when the relevant web application is being shut down.

If you are using Spring's IoC container in a non-web application environment, for example in a rich client desktop environment, and you want the container to shut down gracefully and call the relevant destroy callbacks on your singleton beans, you will need to register a shutdown hook with the JVM. This is quite easy to do (see below), and will ensure that your Spring IoC container shuts down gracefully and that all resources held by your singletons are released (of course it is still up to you to both configure the destroy callbacks for your singletons and implement such destroy callbacks correctly).

So to register a shutdown hook that enables the graceful shutdown of the relevant Spring IoC container, you simply need to call the registerShutdownHook() method that is declared on the AbstractApplicationContext class. To wit...

import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public final class Boot {

    public static void main(final String[] args) throws Exception {
        AbstractApplicationContext ctx
            = new ClassPathXmlApplicationContext(new String []{"beans.xml"});

        // add a shutdown hook for the above context... 
        ctx.registerShutdownHook();

        // app runs here...

        // main method exits, hook is called prior to the app shutting down...
    }
}

3.5.2. Knowing who you are

3.5.2.1.  BeanFactoryAware

A class which implements the org.springframework.beans.factory.BeanFactoryAware interface is provided with a reference to the BeanFactory that created it, when it is created by that BeanFactory.

public interface BeanFactoryAware {

    void setBeanFactory(BeanFactory beanFactory) throws BeansException;
}

This allows beans to manipulate the BeanFactory that created them programmatically, through the BeanFactory interface, or by casting the reference to a known subclass of this which exposes additional functionality. Primarily this would consist of programmatic retrieval of other beans. While there are cases when this capability is useful, it should generally be avoided, since it couples the code to Spring, and does not follow the Inversion of Control style, where collaborators are provided to beans as properties.

An alternative option that is equivalent in effect to the BeanFactoryAware-based approach is to use the org.springframework.beans.factory.config.ObjectFactoryCreatingFactoryBean. (It should be noted that this approach still does not reduce the coupling to Spring, but it does not violate the central principle of IoC as much as the BeanFactoryAware-based approach.)

The ObjectFactoryCreatingFactoryBean is a FactoryBean implementation that returns a reference to an object (factory) that can in turn be used to effect a bean lookup. The ObjectFactoryCreatingFactoryBean class does itself implement the BeanFactoryAware interface; what client beans are actually injected with is an instance of the ObjectFactory interface. This is a Spring-specific interface (and hence there is still no total decoupling from Spring), but clients can then use the ObjectFactory's getObject() method to effect the bean lookup (under the hood the ObjectFactory implementation instance that is returned simply delegates down to a BeanFactory to actually lookup a bean by name). All that you need to do is supply the ObjectFactoryCreatingFactoryBean with the name of the bean that is to be looked up. Let's look at an example:

package x.y;

public class NewsFeed {
    
    private String news;

    public void setNews(String news) {
        this.news = news;
    }

    public String getNews() {
        return this.toString() + ": '" + news + "'";
    }
}
package x.y;

import org.springframework.beans.factory.ObjectFactory;

public class NewsFeedManager {

    private ObjectFactory factory;

    public void setFactory(ObjectFactory factory) {
        this.factory = factory;
    }

    public void printNews() {
        // here is where the lookup is performed; note that there is no
        // need to hardcode the name of the bean that is being looked up...
        NewsFeed news = (NewsFeed) factory.getObject();
        System.out.println(news.getNews());
    }
}

Find below the XML configuration to wire together the above classes using the ObjectFactoryCreatingFactoryBean approach.

<beans>
    <bean id="newsFeedManager" class="x.y.NewsFeedManager">
        <property name="factory">
            <bean
class="org.springframework.beans.factory.config.ObjectFactoryCreatingFactoryBean">
                <property name="targetBeanName">
                    <idref local="newsFeed" />
                </property>
            </bean>
        </property>
    </bean>
    <bean id="newsFeed" class="x.y.NewsFeed" scope="prototype">
        <property name="news" value="... that's fit to print!" />
    </bean>
</beans>

And here is a small driver program to test the fact that new (prototype) instances of the newsFeed bean are actually being returned for each call to the injected ObjectFactory inside the NewsFeedManager's printNews() method.

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import x.y.NewsFeedManager;

public class Main {

    public static void main(String[] args) throws Exception {

        ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
        NewsFeedManager manager = (NewsFeedManager) ctx.getBean("newsFeedManager");
        manager.printNews();
        manager.printNews();
    }
}

The output from running the above program will look like so (results will of course vary on your machine).

x.y.NewsFeed@1292d26: '... that's fit to print!'
x.y.NewsFeed@5329c5: '... that's fit to print!'

3.5.2.2. BeanNameAware

If a bean implements the org.springframework.beans.factory.BeanNameAware interface and is deployed in a BeanFactory, the BeanFactory will call the bean through this interface to inform the bean of the id it was deployed under. The callback will be invoked after population of normal bean properties but before an initialization callback like InitializingBean's afterPropertiesSet or a custom init-method.
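
A minimal sketch of a bean that makes use of this callback might look like the following (the class name and accessor are assumptions for illustration only):

package com.foo;

import org.springframework.beans.factory.BeanNameAware;

public class NamedBean implements BeanNameAware {

    private String beanName;

    public void setBeanName(String name) {
        // the container supplies the id (or name) under which this bean was deployed
        this.beanName = name;
    }

    public String getBeanName() {
        return this.beanName;
    }
}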

3.6. Bean definition inheritance

A bean definition potentially contains a large amount of configuration information, including container specific information (for example initialization method, static factory method name, and so forth) and constructor arguments and property values. A child bean definition is a bean definition that inherits configuration data from a parent definition. It is then able to override some values, or add others, as needed. Using parent and child bean definitions can potentially save a lot of typing. Effectively, this is a form of templating.

When working with a BeanFactory programmatically, child bean definitions are represented by the ChildBeanDefinition class. Most users will never work with them on this level, instead configuring bean definitions declaratively in something like the XmlBeanFactory. When using XML-based configuration metadata a child bean definition is indicated simply by using the 'parent' attribute, specifying the parent bean as the value of this attribute.

<bean id="inheritedTestBean" abstract="true"
    class="org.springframework.beans.TestBean">
  <property name="name" value="parent"/>
  <property name="age" value="1"/>
</bean>

<bean id="inheritsWithDifferentClass"
      class="org.springframework.beans.DerivedTestBean"
      parent="inheritedTestBean" init-method="initialize">
    
  <property name="name" value="override"/>
  <!-- the age property value of 1 will be inherited from  parent -->

</bean>

A child bean definition will use the bean class from the parent definition if none is specified, but can also override it. In the latter case, the child bean class must be compatible with the parent; that is, it must accept the parent's property values.

A child bean definition will inherit constructor argument values, property values and method overrides from the parent, with the option to add new values. If any init-method, destroy-method and/or static factory method settings are specified, they will override the corresponding parent settings.

The remaining settings will always be taken from the child definition: depends on, autowire mode, dependency check, singleton, scope, lazy init.

Note that in the example above, we have explicitly marked the parent bean definition as abstract by using the abstract attribute. In the case that the parent definition does not specify a class, explicitly marking the parent bean definition as abstract is required, as follows:

<bean id="inheritedTestBeanWithoutClass" abstract="true">
    <property name="name" value="parent"/>
    <property name="age" value="1"/>
</bean>

<bean id="inheritsWithClass" class="org.springframework.beans.DerivedTestBean"
    parent="inheritedTestBeanWithoutClass" init-method="initialize">
  <property name="name" value="override"/>
  <!-- age will inherit the value of 1 from the parent bean definition-->
</bean>

The parent bean cannot get instantiated on its own since it is incomplete, and it is also explicitly marked as abstract. When a definition is defined to be abstract like this, it is usable only as a pure template bean definition that will serve as a parent definition for child definitions. Trying to use such an abstract parent bean on its own (by referring to it as a ref property of another bean, or doing an explicit getBean() call with the parent bean id), will result in an error. Similarly, the container's internal preInstantiateSingletons() method will completely ignore bean definitions which are defined as abstract.

[Note]Note

ApplicationContexts (but not BeanFactories) will by default pre-instantiate all singletons. Therefore it is important (at least for singleton beans) that if you have a (parent) bean definition which you intend to use only as a template, and this definition specifies a class, you must make sure to set the 'abstract' attribute to 'true', otherwise the application context will actually (attempt to) pre-instantiate the abstract bean.

3.7. Container extension points

The IoC component of the Spring Framework has been designed for extension. There is typically no need for an application developer to subclass any of the various BeanFactory or ApplicationContext implementation classes. The Spring IoC container can be infinitely extended by plugging in implementations of special integration interfaces. The next few sections are devoted to detailing all of these various integration interfaces.

3.7.1. Customizing beans using BeanPostProcessors

The first extension point that we will look at is the BeanPostProcessor interface. This interface defines a number of callback methods that you as an application developer can implement in order to provide your own (or override the container's default) instantiation logic, dependency-resolution logic, and so forth. If you want to do some custom logic after the Spring container has finished instantiating, configuring and otherwise initializing a bean, you can plug in one or more BeanPostProcessor implementations.

You can configure multiple BeanPostProcessors if you wish. You can control the order in which these BeanPostProcessors execute by setting the 'order' property (you can only set this property if the BeanPostProcessor implements the Ordered interface; if you write your own BeanPostProcessor you should consider implementing the Ordered interface too); consult the Javadocs for the BeanPostProcessor and Ordered interfaces for more details.
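
By way of illustration only, a minimal sketch of a BeanPostProcessor that also implements the Ordered interface might look something like the following (the class name is an assumption, both callbacks simply pass the bean through unchanged, and getOrder() is all that the Ordered interface requires):

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.core.Ordered;

public class OrderedBeanPostProcessor implements BeanPostProcessor, Ordered {

    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }

    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }

    // a lower value means that this post-processor is applied earlier
    public int getOrder() {
        return 0;
    }
}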

[Note]Note

BeanPostProcessors operate on bean (or object) instances; that is to say, the Spring IoC container will have instantiated a bean instance for you, and then BeanPostProcessors get a chance to do their stuff.

If you want to change the actual bean definition (that is, the recipe that defines the bean), then you rather need to use a BeanFactoryPostProcessor (described below in the section entitled Section 3.7.2, “Customizing configuration metadata with BeanFactoryPostProcessors”).

Also, BeanPostProcessors are scoped per-container. This is only relevant if you are using container hierarchies. If you define a BeanPostProcessor in one container, it will only do its stuff on the beans in that container. Beans that are defined in another container will not be post-processed by BeanPostProcessors in another container, even if both containers are part of the same hierarchy.

The org.springframework.beans.factory.config.BeanPostProcessor interface consists of exactly two callback methods. When such a class is registered as a post-processor with the container (see below for how this registration is effected), for each bean instance that is created by the container, the post-processor will get a callback from the container both before any container initialization methods (such as afterPropertiesSet and any declared init method) are called, and also afterwards. The post-processor is free to do what it wishes with the bean instance, including ignoring the callback completely. A bean post-processor will typically check for marker interfaces, or do something such as wrap a bean with a proxy; some of the Spring AOP infrastructure classes are implemented as bean post-processors and they do this proxy-wrapping logic.

It is important to know that a BeanFactory treats bean post-processors slightly differently than an ApplicationContext. An ApplicationContext will automatically detect any beans which are defined in the configuration metadata which is supplied to it that implement the BeanPostProcessor interface, and register them as post-processors, to be then called appropriately by the container on bean creation. Nothing else needs to be done other than deploying the post-processors in a similar fashion to any other bean. On the other hand, when using a BeanFactory implementation, bean post-processors explicitly have to be registered, with code like this:

ConfigurableBeanFactory factory = new XmlBeanFactory(...);
            
// now register any needed BeanPostProcessor instances
MyBeanPostProcessor postProcessor = new MyBeanPostProcessor();
factory.addBeanPostProcessor(postProcessor);

// now start using the factory

This explicit registration step is not convenient, and this is one of the reasons why the various ApplicationContext implementations are preferred above plain BeanFactory implementations in the vast majority of Spring-backed applications, especially when using BeanPostProcessors.

[Note]BeanPostProcessors and AOP auto-proxying

Classes that implement the BeanPostProcessor interface are special, and so they are treated differently by the container. All BeanPostProcessors and their directly referenced beans will be instantiated on startup, as part of the special startup phase of the ApplicationContext; all those BeanPostProcessors will then be registered in a sorted fashion and applied to all further beans. Since AOP auto-proxying is implemented as a BeanPostProcessor itself, neither BeanPostProcessors nor their directly referenced beans are eligible for auto-proxying (and thus will not have aspects 'woven' into them).

For any such bean, you should see an info log message: “Bean 'foo' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)”.

Find below some examples of how to write, register, and use BeanPostProcessors in the context of an ApplicationContext.

3.7.1.1. Example: Hello World, BeanPostProcessor-style

This first example is hardly compelling, but serves to illustrate basic usage. All we are going to do is code a custom BeanPostProcessor implementation that simply invokes the toString() method of each bean as it is created by the container and prints the resulting string to the system console. Yes, it is not hugely useful, but serves to get the basic concepts across before we move into the second example which is actually useful.

Find below the custom BeanPostProcessor implementation class definition:

package scripting;

import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.beans.BeansException;

public class InstantiationTracingBeanPostProcessor implements BeanPostProcessor {

    // simply return the instantiated bean as-is
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        return bean; // we could potentially return any object reference here...
    }

    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        System.out.println("Bean '" + beanName + "' created : " + bean.toString());
        return bean;
    }
}

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:lang="http://www.springframework.org/schema/lang"
       xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang-2.0.xsd">

    <lang:groovy id="messenger"
          script-source="classpath:org/springframework/scripting/groovy/Messenger.groovy">
        <lang:property name="message" value="Fiona Apple Is Just So Dreamy."/> 
    </lang:groovy>
    
    <!-- 
        when the above bean ('messenger') is instantiated, this custom
        BeanPostProcessor implementation will output the fact to the system console
     -->
    <bean class="scripting.InstantiationTracingBeanPostProcessor"/>

</beans>

Notice how the InstantiationTracingBeanPostProcessor is simply defined; it doesn't even have a name, and because it is a bean it can be dependency injected just like any other bean. (The above configuration also just so happens to define a bean that is backed by a Groovy script. The Spring 2.0 dynamic language support is detailed in the chapter entitled Chapter 24, Dynamic language support.)

Find below a small driver script to exercise the above code and configuration:

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.scripting.Messenger;

public final class Boot {

    public static void main(final String[] args) throws Exception {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("scripting/beans.xml");
        Messenger messenger = (Messenger) ctx.getBean("messenger");
        System.out.println(messenger);
    }
}

The output of executing the above program will be (something like) this:

Bean 'messenger' created : org.springframework.scripting.groovy.GroovyMessenger@272961
org.springframework.scripting.groovy.GroovyMessenger@272961

3.7.1.2. Example: The RequiredAnnotationBeanPostProcessor

Using marker interfaces or annotations in conjunction with a custom BeanPostProcessor implementation is a common means of extending the Spring IoC container. This next example is a bit of a cop-out, in that you are directed to the section entitled Section 25.3.1, “@Required”, which demonstrates the usage of a custom BeanPostProcessor implementation that ships with the Spring distribution and which ensures that JavaBean properties on beans that are marked with an (arbitrary) annotation are actually (configured to be) dependency-injected with a value.
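
As a small taster only (the AccountDao class below is purely illustrative and is not part of the Spring distribution), a bean class using the @Required annotation that is checked by the RequiredAnnotationBeanPostProcessor might look like this:

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Required;

public class AccountDao {

    private DataSource dataSource;

    // if no value is injected for this property, the RequiredAnnotationBeanPostProcessor
    // will cause bean creation to fail with a meaningful error message
    @Required
    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}

The RequiredAnnotationBeanPostProcessor itself is simply registered as a bean in the configuration metadata, just like the custom post-processor in the previous example.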

3.7.2. Customizing configuration metadata with BeanFactoryPostProcessors

The next extension point that we will look at is the org.springframework.beans.factory.config.BeanFactoryPostProcessor. The semantics of this interface are similar to the BeanPostProcessor, with one major difference. BeanFactoryPostProcessors operate on the bean configuration metadata; that is to say, the Spring IoC container will allow BeanFactoryPostProcessors to read the configuration metadata and potentially change it before the container has actually instantiated any other beans.

You can configure multiple BeanFactoryPostProcessors if you wish. You can control the order in which these BeanFactoryPostProcessors execute by setting the 'order' property (you can only set this property if the BeanFactoryPostProcessor implements the Ordered interface; if you write your own BeanFactoryPostProcessor you should consider implementing the Ordered interface too); consult the Javadocs for the BeanFactoryPostProcessor and Ordered interfaces for more details.

[Note]Note

If you want to change the actual bean instances (the objects that are created from the configuration metadata), then you rather need to use a BeanPostProcessor (described above in the section entitled Section 3.7.1, “Customizing beans using BeanPostProcessors”).

Also, BeanFactoryPostProcessors are scoped per-container. This is only relevant if you are using container hierarchies. If you define a BeanFactoryPostProcessor in one container, it will only do its stuff on the bean definitions in that container. Bean definitions in another container will not be post-processed by BeanFactoryPostProcessors in another container, even if both containers are part of the same hierarchy.

A bean factory post-processor is executed manually (in the case of a BeanFactory) or automatically (in the case of an ApplicationContext) to apply changes of some sort to the configuration metadata that defines a container. Spring includes a number of pre-existing bean factory post-processors, such as the PropertyPlaceholderConfigurer and the PropertyOverrideConfigurer, both described below. A BeanFactoryPostProcessor may also be used to register custom property editors. (The BeanNameAutoProxyCreator, which is very useful for wrapping other beans transactionally or with any other kind of proxy as described later in this manual, is, by contrast, a bean post-processor.)

In a BeanFactory, the process of applying a BeanFactoryPostProcessor is manual, and will be similar to this:

XmlBeanFactory factory = new XmlBeanFactory(new FileSystemResource("beans.xml"));

// bring in some property values from a Properties file
PropertyPlaceholderConfigurer cfg = new PropertyPlaceholderConfigurer();
cfg.setLocation(new FileSystemResource("jdbc.properties"));

// now actually do the replacement
cfg.postProcessBeanFactory(factory);

This explicit registration step is not convenient, and this is one of the reasons why the various ApplicationContext implementations are preferred above plain BeanFactory implementations in the vast majority of Spring-backed applications, especially when using BeanFactoryPostProcessors.

An ApplicationContext will detect any beans which are deployed into it which implement the BeanFactoryPostProcessor interface, and automatically use them as bean factory post-processors, at the appropriate time. Nothing else needs to be done other than deploying these post-processors in a similar fashion to any other bean.

[Note]Note

Just as in the case of BeanPostProcessors, you typically don't want to have BeanFactoryPostProcessors marked as being lazily-initialized. If they are marked as such, then the Spring container will never instantiate them, and thus they won't get a chance to apply their custom logic. If you are using the 'default-lazy-init' attribute on the declaration of your <beans/> element, be sure to mark your various BeanFactoryPostProcessor bean definitions with 'lazy-init="false"'.

3.7.2.1. Example: the PropertyPlaceholderConfigurer

The PropertyPlaceholderConfigurer is used to externalize property values from a BeanFactory definition, into another separate file in the standard Java Properties format. This is useful to allow the person deploying an application to customize environment-specific properties (for example database URLs, usernames and passwords), without the complexity or risk of modifying the main XML definition file or files for the container.

Consider the following XML-based configuration metadata fragment, where a DataSource with placeholder values is defined. We will configure some properties from an external Properties file, and at runtime, we will apply a PropertyPlaceholderConfigurer to the metadata which will replace some properties of the datasource:

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <value>classpath:com/foo/jdbc.properties</value>
    </property>
</bean>

<bean id="dataSource" destroy-method="close"
      class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="${jdbc.driverClassName}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>

The actual values come from another file in the standard Java Properties format:

jdbc.driverClassName=org.hsqldb.jdbcDriver
jdbc.url=jdbc:hsqldb:hsql://production:9002
jdbc.username=sa
jdbc.password=root

The PropertyPlaceholderConfigurer doesn't only look for properties in the Properties file you specify, but also checks against the Java System properties if it cannot find a property you are trying to use. This behavior can be customized by setting the systemPropertiesMode property of the configurer, which supports three modes: always override, never override, and override only if the property cannot be found in the specified properties file. Please consult the Javadoc for the PropertyPlaceholderConfigurer for more information.
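
For example, when applying the configurer manually to a BeanFactory, the mode might be set programmatically; the following is only a sketch of the approach, and the file names are assumptions:

import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;
import org.springframework.beans.factory.xml.XmlBeanFactory;
import org.springframework.core.io.ClassPathResource;

public class PlaceholderModeExample {

    public static void main(String[] args) {
        XmlBeanFactory factory = new XmlBeanFactory(new ClassPathResource("beans.xml"));

        PropertyPlaceholderConfigurer cfg = new PropertyPlaceholderConfigurer();
        cfg.setLocation(new ClassPathResource("com/foo/jdbc.properties"));

        // fall back to System properties only when a key is missing from the file;
        // the other modes are SYSTEM_PROPERTIES_MODE_NEVER and SYSTEM_PROPERTIES_MODE_OVERRIDE
        cfg.setSystemPropertiesMode(PropertyPlaceholderConfigurer.SYSTEM_PROPERTIES_MODE_FALLBACK);

        cfg.postProcessBeanFactory(factory);
    }
}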

[Tip]Class name substitution

The PropertyPlaceholderConfigurer can be used to substitute class names, which is sometimes useful when you have to pick a particular implementation class at runtime. For example:

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <value>classpath:com/foo/strategy.properties</value>
    </property>
    <property name="properties">
        <value>custom.strategy.class=com.foo.DefaultStrategy</value>
    </property>
</bean>

<bean id="serviceStrategy" class="${custom.strategy.class}"/>

If the class cannot be resolved to a valid class at runtime, resolution of the bean will fail once it is about to be created (which, for a non-lazy-init bean, is during the preInstantiateSingletons() phase of an ApplicationContext).

3.7.2.2. Example: the PropertyOverrideConfigurer

The PropertyOverrideConfigurer, another bean factory post-processor, is similar to the PropertyPlaceholderConfigurer, but in contrast to the latter, the original definitions can have default values or no values at all for bean properties. If an overriding Properties file does not have an entry for a certain bean property, the default context definition is used.

Note that the bean factory definition is not aware of being overridden, so it is not immediately obvious when looking at the XML definition file that the override configurer is being used. If multiple PropertyOverrideConfigurer instances define different values for the same bean property, the last one wins (due to the overriding mechanism).

Properties file configuration lines are expected to be in the format:

beanName.property=value

An example properties file might look like this:

dataSource.driverClassName=com.mysql.jdbc.Driver
dataSource.url=jdbc:mysql:mydb

This example file would be usable against a container definition which contains a bean called dataSource, which has driverClassName and url properties.

Note that compound property names are also supported, as long as every component of the path except the final property being overridden is already non-null (presumably initialized by the constructors). In this example...

foo.fred.bob.sammy=123

... the sammy property of the bob property of the fred property of the foo bean is being set to the scalar value 123.
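
A minimal sketch of applying such overrides manually to a BeanFactory follows (the file names are assumptions; within an ApplicationContext the PropertyOverrideConfigurer would simply be declared as a bean and picked up automatically):

import org.springframework.beans.factory.config.PropertyOverrideConfigurer;
import org.springframework.beans.factory.xml.XmlBeanFactory;
import org.springframework.core.io.FileSystemResource;

public class OverrideExample {

    public static void main(String[] args) {
        XmlBeanFactory factory = new XmlBeanFactory(new FileSystemResource("beans.xml"));

        PropertyOverrideConfigurer overrides = new PropertyOverrideConfigurer();
        overrides.setLocation(new FileSystemResource("override.properties"));

        // pushes entries such as 'dataSource.url=...' into the matching bean definitions
        overrides.postProcessBeanFactory(factory);
    }
}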

3.7.3. Customizing instantiation logic using FactoryBeans

The org.springframework.beans.factory.FactoryBean interface is to be implemented by objects that are themselves factories.

The FactoryBean interface is a point of pluggability into the Spring IoC container's instantiation logic. If you have some complex initialization code that is better expressed in Java as opposed to a (potentially) verbose amount of XML, you can create your own FactoryBean, write the complex initialization inside that class, and then plug your custom FactoryBean into the container.

The FactoryBean interface provides three methods:

  • Object getObject(): has to return an instance of the object this factory creates. The instance can possibly be shared (depending on whether this factory returns singletons or prototypes).

  • boolean isSingleton(): has to return true if this FactoryBean returns singletons, false otherwise

  • Class getObjectType(): has to return either the object type returned by the getObject() method or null if the type isn't known in advance

The FactoryBean concept and interface is used in a number of places within the Spring Framework; at the time of writing there are over 50 implementations of the FactoryBean interface that ship with Spring itself.

Finally, there is sometimes a need to ask a container for an actual FactoryBean instance itself, not the bean it produces. This may be achieved by prepending the bean id with '&' (sans quotes) when calling the getBean method of the BeanFactory (including ApplicationContext). So for a given FactoryBean with an id of myBean, invoking getBean("myBean") on the container will return the product of the FactoryBean, but invoking getBean("&myBean") will return the FactoryBean instance itself.
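
To make the interface a little more concrete, find below a minimal (purely illustrative) FactoryBean that assembles a java.util.Properties instance in Java code; the class itself is an assumption and does not ship with Spring:

import java.util.Properties;

import org.springframework.beans.factory.FactoryBean;

public class DefaultsFactoryBean implements FactoryBean {

    // the object handed out to collaborators is the Properties instance, not this factory
    public Object getObject() throws Exception {
        Properties defaults = new Properties();
        defaults.setProperty("timeout", "30");
        return defaults;
    }

    public Class getObjectType() {
        return Properties.class;
    }

    public boolean isSingleton() {
        return true;
    }
}

Given a bean definition for this class with an id of (say) 'defaults', calling getBean("defaults") would return the Properties instance, whereas getBean("&defaults") would return the DefaultsFactoryBean itself.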

3.8. The ApplicationContext

While the beans package provides basic functionality for managing and manipulating beans, often in a programmatic way, the context package adds ApplicationContext, which enhances BeanFactory functionality in a more framework-oriented style. Many users will use ApplicationContext in a completely declarative fashion, not even having to create it manually, but instead relying on support classes such as ContextLoader to automatically start an ApplicationContext as part of the normal startup process of a J2EE web-app. Of course, it is still possible to programmatically create an ApplicationContext.

The basis for the context package is the ApplicationContext interface, located in the org.springframework.context package. Deriving from the BeanFactory interface, it provides all the functionality of BeanFactory. To allow working in a more framework-oriented fashion, using layering and hierarchical contexts, the context package also provides the following functionality:

  • MessageSource, providing access to messages in i18n-style

  • Access to resources, such as URLs and files

  • Event propagation to beans implementing the ApplicationListener interface

  • Loading of multiple (hierarchical) contexts, allowing each to be focused on one particular layer, for example the web layer of an application

As the ApplicationContext includes all functionality of the BeanFactory, it is generally recommended that it be used over the BeanFactory, except for a few limited situations such as perhaps in an Applet, where memory consumption might be critical, and a few extra kilobytes might make a difference. The following sections describe functionality that ApplicationContext adds to basic BeanFactory capabilities.

3.8.1. Internationalization using MessageSources

The ApplicationContext interface extends an interface called MessageSource, and therefore provides messaging (i18n or internationalization) functionality. Together with the HierarchicalMessageSource, capable of resolving hierarchical messages, these are the basic interfaces Spring provides to do message resolution. Let's quickly review the methods defined there:

  • String getMessage(String code, Object[] args, String defaultMessage, Locale loc): the basic method used to retrieve a message from the MessageSource. When no message is found for the specified locale, the default message is used. Any arguments passed in are used as replacement values, using the MessageFormat functionality provided by the standard library.

  • String getMessage(String code, Object[] args, Locale loc): essentially the same as the previous method, but with one difference: no default message can be specified; if the message cannot be found, a NoSuchMessageException is thrown.

  • String getMessage(MessageSourceResolvable resolvable, Locale locale): all properties used in the methods above are also wrapped in a class named MessageSourceResolvable, which you can use via this method.

When an ApplicationContext gets loaded, it automatically searches for a MessageSource bean defined in the context. The bean has to have the name messageSource. If such a bean is found, all calls to the methods described above will be delegated to the message source that was found. If no message source was found, the ApplicationContext attempts to see if it has a parent containing a bean with the same name. If so, it uses that bean as the MessageSource. If it can't find any source for messages, an empty StaticMessageSource will be instantiated in order to be able to accept calls to the methods defined above.

Spring currently provides two MessageSource implementations. These are the ResourceBundleMessageSource and the StaticMessageSource. Both implement HierarchicalMessageSource in order to do nested messaging. The StaticMessageSource is hardly ever used but provides programmatic ways to add messages to the source. The ResourceBundleMessageSource is more interesting and is the one we will provide an example for:

<beans>
  <bean id="messageSource"
        class="org.springframework.context.support.ResourceBundleMessageSource">
    <property name="basenames">
      <list>
        <value>format</value>
        <value>exceptions</value>
        <value>windows</value>
      </list>
    </property>
  </bean>
</beans>

This assumes you have three resource bundles defined on your classpath called format, exceptions and windows. Using the JDK standard way of resolving messages through ResourceBundles, any request to resolve a message will be handled. For the purposes of the example, let's assume the contents of two of the above resource bundle files are...

# in 'format.properties'
message=Alligators rock!
# in 'exceptions.properties'
argument.required=The '{0}' argument is required.

Some (admittedly trivial) driver code to exercise the MessageSource functionality can be found below. Remember that all ApplicationContext implementations are also MessageSource implementations and so can be cast to the MessageSource interface.

public static void main(String[] args) {
    MessageSource resources = new ClassPathXmlApplicationContext("beans.xml");
    String message = resources.getMessage("message", null, "Default", null);
    System.out.println(message);
}

The resulting output from the above program will be...

Alligators rock!

So to summarize, the MessageSource is defined in a file called 'beans.xml' (this file exists at the root of your classpath). The 'messageSource' bean definition refers to a number of resource bundles via its basenames property; the three files that are passed in the list to the basenames property exist as files at the root of your classpath (and are called format.properties, exceptions.properties, and windows.properties respectively).

Let's look at another example, this time passing arguments to the message lookup; these arguments will be converted into strings and inserted into placeholders in the lookup message. This is perhaps best explained with an example:

<beans>

    <!-- this MessageSource is being used in a web application -->
    <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource">
        <property name="baseName" value="WEB-INF/test-messages"/>
    </bean>
    
    <!-- let's inject the above MessageSource into this POJO -->
    <bean id="example" class="com.foo.Example">
        <property name="messages" ref="messageSource"/>
    </bean>

</beans>

public class Example {

    private MessageSource messages;

    public void setMessages(MessageSource messages) {
        this.messages = messages;
    }

    public void execute() {
        String message = this.messages.getMessage("argument.required",
            new Object [] {"userDao"}, "Required", null);
        System.out.println(message);
    }

}

The resulting output from the invocation of the execute() method will be...

The 'userDao' argument is required.

With regard to internationalization (i18n), Spring's various MessageSource implementations follow the same locale resolution and fallback rules as the standard JDK ResourceBundle. In short, and continuing with the example 'messageSource' defined previously, if you want to resolve messages against the British (en-GB) locale, you would create files called format_en_GB.properties, exceptions_en_GB.properties, and windows_en_GB.properties respectively.

Locale resolution is typically going to be managed by the surrounding environment of the application. For the purpose of this example though, we'll just manually specify the locale that we want to resolve our (British) messages against.

# in 'exceptions_en_GB.properties'
argument.required=Ebagum lad, the '{0}' argument is required, I say, required.

public static void main(final String[] args) {
    MessageSource resources = new ClassPathXmlApplicationContext("beans.xml");
    String message = resources.getMessage("argument.required",
        new Object [] {"userDao"}, "Required", Locale.UK);
    System.out.println(message);
}

The resulting output from the running of the above program will be...

Ebagum lad, the 'userDao' argument is required, I say, required.

The MessageSourceAware interface can also be used to acquire a reference to any MessageSource that has been defined. Any bean that is defined in an ApplicationContext that implements the MessageSourceAware interface will be injected with the application context's MessageSource when it (the bean) is being created and configured.
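
A minimal sketch of such a bean follows (the class name and method are assumptions; the message code simply reuses the 'argument.required' entry from the example above):

import org.springframework.context.MessageSource;
import org.springframework.context.MessageSourceAware;

public class MessageAwareExample implements MessageSourceAware {

    private MessageSource messages;

    // called by the ApplicationContext when this bean is created and configured
    public void setMessageSource(MessageSource messages) {
        this.messages = messages;
    }

    public void reportMissingArgument(String argumentName) {
        System.out.println(this.messages.getMessage(
                "argument.required", new Object[] {argumentName}, "Required", null));
    }
}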

3.8.2. Events

Event handling in the ApplicationContext is provided through the ApplicationEvent class and ApplicationListener interface. If a bean which implements the ApplicationListener interface is deployed into the context, every time an ApplicationEvent gets published to the ApplicationContext, that bean will be notified. Essentially, this is the standard Observer design pattern. Spring provides three standard events:

Table 3.5. Built-in Events

  • ContextRefreshedEvent: published when the ApplicationContext is initialized or refreshed. Initialized here means that all beans are loaded, singletons are pre-instantiated and the ApplicationContext is ready for use.

  • ContextClosedEvent: published when the ApplicationContext is closed, using the close() method on the ApplicationContext. Closed here means that singleton beans (only!) are destroyed.

  • RequestHandledEvent: a web-specific event telling all beans that an HTTP request has been serviced (this will be published after the request has been finished). Note that this event is only applicable for web applications using Spring's DispatcherServlet.


Implementing custom events can be done as well. Simply call the publishEvent() method on the ApplicationContext, specifying a parameter which is an instance of your custom event class implementing ApplicationEvent. Event listeners receive events synchronously. This means the publishEvent() method blocks until all listeners have finished processing the event (it is possible to supply an alternate event publishing strategy via an ApplicationEventMulticaster implementation). Furthermore, when a listener receives an event it operates inside the transaction context of the publisher, if a transaction context is available.

Let's look at an example. First, the ApplicationContext:

<bean id="emailer" class="example.EmailBean">
  <property name="blackList">
    <list>
      <value>black@list.org</value>
      <value>white@list.org</value>
      <value>john@doe.org</value>
    </list>
  </property>
</bean>

<bean id="blackListListener" class="example.BlackListNotifier">
  <property name="notificationAddress" value="spam@list.org"/>
</bean>

Now, let's look at the actual classes:

public class EmailBean implements ApplicationContextAware {

    private List blackList;
    private ApplicationContext ctx;

    public void setBlackList(List blackList) {
        this.blackList = blackList;
    }

    public void setApplicationContext(ApplicationContext ctx) {
        this.ctx = ctx;
    }

    public void sendEmail(String address, String text) {
        if (blackList.contains(address)) {
            BlackListEvent evt = new BlackListEvent(address, text);
            ctx.publishEvent(evt);
            return;
        }
        // send email...
    }
}

public class BlackListNotifier implements ApplicationListener {

    private String notificationAddress;
    
    public void setNotificationAddress(String notificationAddress) {
        this.notificationAddress = notificationAddress;
    }

    public void onApplicationEvent(ApplicationEvent evt) {
        if (evt instanceof BlackListEvent) {
            // notify appropriate person...
        }
    }
}
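
The BlackListEvent referenced above is a custom event class; it is not shown in the configuration, but a minimal sketch (the exact fields are an assumption, mirroring the two constructor arguments used in EmailBean) would simply extend ApplicationEvent:

import org.springframework.context.ApplicationEvent;

public class BlackListEvent extends ApplicationEvent {

    private final String text;

    public BlackListEvent(String address, String text) {
        super(address); // the blacklisted address doubles as the event source
        this.text = text;
    }

    public String getAddress() {
        return (String) getSource();
    }

    public String getText() {
        return this.text;
    }
}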

Of course, this particular example could probably be implemented in better ways (perhaps by using AOP features), but it should be sufficient to illustrate the basic event mechanism.

3.8.3. Convenient access to low-level resources

For optimal usage and understanding of application contexts, users should generally familiarize themselves with Spring's Resource abstraction, as described in the chapter entitled Chapter 4, Resources.

An application context is a ResourceLoader, able to be used to load Resources. A Resource is essentially a java.net.URL on steroids (in fact, it just wraps and uses a URL where appropriate), which can be used to obtain low-level resources from almost any location in a transparent fashion, including from the classpath, a filesystem location, anywhere describable with a standard URL, and some other variations. If the resource location string is a simple path without any special prefixes, where those resources come from is specific and appropriate to the actual application context type.

A bean deployed into the application context may implement the special marker interface, ResourceLoaderAware, to be automatically called back at initialization time with the application context itself passed in as the ResourceLoader.

A bean may also expose properties of type Resource, to be used to access static resources, and expect that they will be injected into it like any other properties. The person deploying the bean may specify those Resource properties as simple String paths, and rely on a special JavaBean PropertyEditor that is automatically registered by the context, to convert those text strings to actual Resource objects.

The location path or paths supplied to an ApplicationContext constructor are actually resource strings, and in simple form are treated appropriately to the specific context implementation ( ClassPathXmlApplicationContext treats a simple location path as a classpath location), but may also be used with special prefixes to force loading of definitions from the classpath or a URL, regardless of the actual context type.

3.8.4. Convenient ApplicationContext instantiation for web applications

As opposed to the BeanFactory, which will often be created programmatically, ApplicationContext instances can be created declaratively using for example a ContextLoader. Of course you can also create ApplicationContext instances programmatically using one of the ApplicationContext implementations. First, let's examine the ContextLoader interface and its implementations.

The ContextLoader interface has two implementations: the ContextLoaderListener and the ContextLoaderServlet. They both have the same functionality but differ in that the listener version cannot be used in Servlet 2.2 compatible containers. Since the Servlet 2.4 specification, servlet context listeners are required to execute immediately after the servlet context for the web application has been created and is available to service the first request (and also when the servlet context is about to be shut down): as such a servlet context listener is an ideal place to initialize the Spring ApplicationContext. It is up to you as to which one you use, but all things being equal you should probably prefer ContextLoaderListener; for more information on compatibility, have a look at the Javadoc for the ContextLoaderServlet.

You can register an ApplicationContext using the ContextLoaderListener as follows:

<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>/WEB-INF/daoContext.xml /WEB-INF/applicationContext.xml</param-value>
</context-param>

<listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

<!-- or use the ContextLoaderServlet instead of the above listener
<servlet>
  <servlet-name>context</servlet-name>
  <servlet-class>org.springframework.web.context.ContextLoaderServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
-->

The listener inspects the contextConfigLocation parameter. If it doesn't exist, it'll use /WEB-INF/applicationContext.xml as a default. When it does exist, it'll separate the String using predefined delimiters (comma, semi-colon and whitespace) and use the values as locations where application contexts will be searched for. The ContextLoaderServlet can be used instead of the ContextLoaderListener. The servlet will use the 'contextConfigLocation' parameter just as the listener does.

3.9. Glue code and the evil singleton

The majority of the code inside an application is best written in a DI style, where that code is served out of a Spring IoC container, has its own dependencies supplied by the container when it is created, and is completely unaware of the container. However, for the small glue layers of code that are sometimes needed to tie other code together, there is sometimes a need for singleton (or quasi-singleton) style access to a Spring IoC container. For example, third party code may try to construct new objects directly (Class.forName() style), without the ability to force it to get these objects out of a Spring IoC container. If the object constructed by the third party code is just a small stub or proxy, which then uses a singleton style access to a Spring IoC container to get a real object to delegate to, then inversion of control has still been achieved for the majority of the code (the object coming out of the container); thus most code is still unaware of the container or how it is accessed, and remains uncoupled from other code, with all ensuing benefits. EJBs may also use this stub/proxy approach to delegate to a plain Java implementation object, coming out of a Spring IoC container. While the Spring IoC container itself ideally does not have to be a singleton, it may be unrealistic in terms of memory usage or initialization times (when using beans in the Spring IoC container such as a Hibernate SessionFactory) for each bean to use its own, non-singleton Spring IoC container.

As another example, in complex J2EE apps with multiple layers (various JAR files, EJBs, and WAR files packaged as an EAR), with each layer having its own Spring IoC container definition (effectively forming a hierarchy), the preferred approach when there is only one web-app (WAR) in the top hierarchy is to simply create one composite Spring IoC container from the multiple XML definition files from each layer. All of the various Spring IoC container implementations may be constructed from multiple definition files in this fashion. However, if there are multiple sibling web-applications at the root of the hierarchy, it is problematic to create a Spring IoC container for each web-application which consists of mostly identical bean definitions from lower layers, as there may be issues due to increased memory usage, issues with creating multiple copies of beans which take a long time to initialize (e.g. a Hibernate SessionFactory), and possible issues due to side-effects. As an alternative, classes such as ContextSingletonBeanFactoryLocator or SingletonBeanFactoryLocator may be used to demand-load multiple hierarchical (that is one container is the parent of another) Spring IoC container instances in a singleton fashion, which may then be used as the parents of the web-application Spring IoC container instances. The result is that bean definitions for lower layers are loaded only as needed, and loaded only once.

3.9.1. Using the Singleton-helper classes

You can see a detailed example of their usage in SingletonBeanFactoryLocator and ContextSingletonBeanFactoryLocator by viewing their respective Javadocs.

As mentioned in the chapter on EJBs, the Spring convenience base classes for EJBs normally use a non-singleton BeanFactoryLocator implementation, which is easily replaced by the use of SingletonBeanFactoryLocator and ContextSingletonBeanFactoryLocator if there is a need.



[1] See the section entitled Background

Chapter 4. Resources

4.1. Introduction

Java's standard java.net.URL class and standard handlers for various URL prefixes unfortunately are not quite adequate enough for all access to low-level resources. For example, there is no standardized URL implementation that may be used to access a resource that needs to be obtained from the classpath, or relative to a ServletContext. While it is possible to register new handlers for specialized URL prefixes (similar to existing handlers for prefixes such as http:), this is generally quite complicated, and the URL interface still lacks some desirable functionality, such as a method to check for the existence of the resource being pointed to.

4.2. The Resource interface

Spring's Resource interface is meant to be a more capable interface for abstracting access to low-level resources.

public interface Resource extends InputStreamSource {

    boolean exists();

    boolean isOpen();

    URL getURL() throws IOException;

    File getFile() throws IOException;

    Resource createRelative(String relativePath) throws IOException;

    String getFilename();

    String getDescription();
}
public interface InputStreamSource {

    InputStream getInputStream() throws IOException;
}

Some of the most important methods from the Resource interface are:

  • getInputStream(): locates and opens the resource, returning an InputStream for reading from the resource. It is expected that each invocation returns a fresh InputStream. It is the responsibility of the caller to close the stream.

  • exists(): returns a boolean indicating whether this resource actually exists in physical form.

  • isOpen(): returns a boolean indicating whether this resource represents a handle with an open stream. If true, the InputStream cannot be read multiple times, and must be read once only and then closed to avoid resource leaks. Will be false for all usual resource implementations, with the exception of InputStreamResource.

  • getDescription(): returns a description for this resource, to be used for error output when working with the resource. This is often the fully qualified file name or the actual URL of the resource.

Other methods allow you to obtain an actual URL or File object representing the resource (if the underlying implementation is compatible, and supports that functionality).

The Resource abstraction is used extensively in Spring itself, as an argument type in many method signatures when a resource is needed. Other methods in some Spring APIs (such as the constructors to various ApplicationContext implementations), take a String which in unadorned or simple form is used to create a Resource appropriate to that context implementation, or via special prefixes on the String path, allow the caller to specify that a specific Resource implementation must be created and used.

While the Resource interface is used a lot with Spring and by Spring, it's actually very useful to use as a general utility class by itself in your own code, for access to resources, even when your code doesn't know or care about any other parts of Spring. While this couples your code to Spring, it really only couples it to this small set of utility classes, which are serving as a more capable replacement for URL, and can be considered equivalent to any other library you would use for this purpose.

It is important to note that the Resource abstraction does not replace functionality: it wraps it where possible. For example, a UrlResource wraps a URL, and uses the wrapped URL to do its work.
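
For example, a small standalone usage of the abstraction (the resource path below is purely illustrative) that reads the first line of a classpath resource and closes the stream, as the contract requires, might look like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.springframework.core.io.ClassPathResource;
import org.springframework.core.io.Resource;

public class ResourceUsageExample {

    public static void main(String[] args) throws Exception {
        Resource resource = new ClassPathResource("com/foo/myTemplate.txt");

        if (resource.exists()) {
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(resource.getInputStream()));
            try {
                System.out.println(reader.readLine());
            }
            finally {
                // each getInputStream() call returns a fresh stream; the caller must close it
                reader.close();
            }
        }
    }
}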

4.3. Built-in Resource implementations

There are a number of Resource implementations that come supplied straight out of the box in Spring:

4.3.1. UrlResource

The UrlResource wraps a java.net.URL, and may be used to access any object that is normally accessible via a URL, such as files, an HTTP target, an FTP target, etc. All URLs have a standardized String representation, such that appropriate standardized prefixes are used to distinguish one URL type from another. This includes file: for accessing filesystem paths, http: for accessing resources via the HTTP protocol, ftp: for accessing resources via FTP, etc.

A UrlResource is created by Java code explicitly using the UrlResource constructor, but will often be created implicitly when you call an API method which takes a String argument which is meant to represent a path. For the latter case, a JavaBeans PropertyEditor will ultimately decide which type of Resource to create. If the path string contains a few well-known (to it, that is) prefixes such as classpath:, it will create an appropriate specialized Resource for that prefix. However, if it doesn't recognize the prefix, it will assume that this is just a standard URL string, and will create a UrlResource.

4.3.2. ClassPathResource

This class represents a resource which should be obtained from the classpath. This uses either the thread context class loader, a given class loader, or a given class for loading resources.

This Resource implementation supports resolution as java.io.File if the class path resource resides in the file system, but not for classpath resources which reside in a jar and have not been expanded (by the servlet engine, or whatever the environment is) to the filesystem. To address this the various Resource implementations always support resolution as a java.net.URL.

A ClassPathResource is created by Java code explicitly using the ClassPathResource constructor, but will often be created implicitly when you call an API method which takes a String argument which is meant to represent a path. For the latter case, a JavaBeans PropertyEditor will recognize the special prefix classpath: on the string path, and create a ClassPathResource in that case.

4.3.3. FileSystemResource

This is a Resource implementation for java.io.File handles. It obviously supports resolution as a File, and as a URL.

4.3.4. ServletContextResource

This is a Resource implementation for ServletContext resources, interpreting relative paths within the relevant web application's root directory.

This always supports stream access and URL access, but only allows java.io.File access when the web application archive is expanded and the resource is physically on the filesystem. Whether or not it's expanded and on the filesystem like this, or accessed directly from the JAR or somewhere else like a DB (it's conceivable) is actually dependent on the Servlet container.

4.3.5. InputStreamResource

A Resource implementation for a given InputStream. This should only be used if no specific Resource implementation is applicable. In particular, prefer ByteArrayResource or any of the file-based Resource implementations where possible.

In contrast to other Resource implementations, this is a descriptor for an already opened resource - therefore returning true from isOpen(). Do not use it if you need to keep the resource descriptor somewhere, or if you need to read a stream multiple times.

4.3.6. ByteArrayResource

This is a Resource implementation for a given byte array. It creates a ByteArrayInputStream for the given byte array.

It's useful for loading content from any given byte array, without having to resort to a single-use InputStreamResource.

4.4. The ResourceLoader

The ResourceLoader interface is meant to be implemented by objects that can return (i.e. load) Resource instances.

public interface ResourceLoader {
    Resource getResource(String location);
}

All application contexts implement the ResourceLoader interface, and therefore all application contexts may be used to obtain Resource instances.

When you call getResource() on a specific application context, and the location path specified doesn't have a specific prefix, you will get back a Resource type that is appropriate to that particular application context. For example, assume the following snippet of code was executed against a ClassPathXmlApplicationContext instance:

Resource template = ctx.getResource("some/resource/path/myTemplate.txt);

What would be returned would be a ClassPathResource; if the same method was executed against a FileSystemXmlApplicationContext instance, you'd get back a FileSystemResource. For a WebApplicationContext, you'd get back a ServletContextResource, and so on.

As such, you can load resources in a fashion appropriate to the particular application context.

On the other hand, you may also force ClassPathResource to be used, regardless of the application context type, by specifying the special classpath: prefix:

Resource template = ctx.getResource("classpath:some/resource/path/myTemplate.txt);

Similarly, one can force a UrlResource to be used by specifying any of the standard java.net.URL prefixes:

Resource template = ctx.getResource("file:/some/resource/path/myTemplate.txt);
Resource template = ctx.getResource("http://myhost.com/resource/path/myTemplate.txt);

The following table summarizes the strategy for converting Strings to Resources:

Table 4.1. Resource strings

  • classpath: (for example classpath:com/myapp/config.xml): loaded from the classpath.

  • file: (for example file:/data/config.xml): loaded as a URL, from the filesystem. [a]

  • http: (for example http://myserver/logo.png): loaded as a URL.

  • (none) (for example /data/config.xml): depends on the underlying ApplicationContext.

[a] But see also the section entitled Section 4.7.3, “FileSystemResource caveats”.


4.5. The ResourceLoaderAware interface

The ResourceLoaderAware interface is a special marker interface, identifying objects that expect to be provided with a ResourceLoader reference.

public interface ResourceLoaderAware {

   void setResourceLoader(ResourceLoader resourceLoader);
}

When a class implements ResourceLoaderAware and is deployed into an application context (as a Spring-managed bean), it is recognized as ResourceLoaderAware by the application context. The application context will then invoke the setResourceLoader(ResourceLoader) method, supplying itself as the argument (remember, all application contexts in Spring implement the ResourceLoader interface).

Of course, since an ApplicationContext is a ResourceLoader, the bean could also implement the ApplicationContextAware interface and use the supplied application context directly to load resources, but in general, it's better to use the specialized ResourceLoader interface if that's all that's needed. The code would just be coupled to the resource loading interface, which can be considered a utility interface, and not the whole Spring ApplicationContext interface.
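
A minimal sketch of a ResourceLoaderAware bean (the class name and resource path layout are assumptions for illustration only) might look like this:

import org.springframework.context.ResourceLoaderAware;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;

public class TemplateRenderer implements ResourceLoaderAware {

    private ResourceLoader resourceLoader;

    // supplied by the ApplicationContext, which is itself a ResourceLoader
    public void setResourceLoader(ResourceLoader resourceLoader) {
        this.resourceLoader = resourceLoader;
    }

    public Resource loadTemplateFor(String role) {
        // unprefixed paths are resolved in a fashion appropriate to the context type
        return this.resourceLoader.getResource("templates/" + role + ".txt");
    }
}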

4.6. Resources as dependencies

If the bean itself is going to determine and supply the resource path through some sort of dynamic process, it probably makes sense for the bean to use the ResourceLoader interface to load resources. Consider as an example the loading of a template of some sort, where the specific resource that is needed depends on the role of the user. If the resources are static, it makes sense to eliminate the use of the ResourceLoader interface completely, and just have the bean expose the Resource properties it needs, and expect that they will be injected into it.

What makes it trivial to then inject these properties, is that all application contexts register and use a special JavaBeans PropertyEditor which can convert String paths to Resource objects. So if myBean has a template property of type Resource, it can be configured with a simple string for that resource, as follows:

<bean id="myBean" class="...">
  <property name="template" value="some/resource/path/myTemplate.txt"/>
</bean>

Note that the resource path has no prefix, so because the application context itself is going to be used as the ResourceLoader, the resource itself will be loaded via a ClassPathResource, FileSystemResource, or ServletContextResource (as appropriate) depending on the exact type of the context.

If there is a need to force a specific Resource type to be used, then a prefix may be used. The following two examples show how to force a ClassPathResource and a UrlResource (the latter being used to access a filesystem file).

<property name="template" value="classpath:some/resource/path/myTemplate.txt">
<property name="template" value="file:/some/resource/path/myTemplate.txt"/>

4.7. Application contexts and Resource paths

4.7.1. Constructing application contexts

An application context constructor (for a specific application context type) generally takes a string or array of strings as the location path(s) of the resource(s) such as XML files that make up the definition of the context.

When such a location path doesn't have a prefix, the specific Resource type built from that path and used to load the bean definitions, depends on and is appropriate to the specific application context. For example, if you create a ClassPathXmlApplicationContext as follows:

ApplicationContext ctx = new ClassPathXmlApplicationContext("conf/appContext.xml");

The bean definitions will be loaded from the classpath, as a ClassPathResource will be used. But if you create a FileSystemXmlApplicationContext as follows:

ApplicationContext ctx =
    new FileSystemXmlApplicationContext("conf/appContext.xml");

The bean definition will be loaded from a filesystem location, in this case relative to the current working directory.

Note that the use of the special classpath prefix or a standard URL prefix on the location path will override the default type of Resource created to load the definition. So this FileSystemXmlApplicationContext...

ApplicationContext ctx =
    new FileSystemXmlApplicationContext("classpath:conf/appContext.xml");

... will actually load its bean definitions from the classpath. However, it is still a FileSystemXmlApplicationContext. If it is subsequently used as a ResourceLoader, any unprefixed paths will still be treated as filesystem paths.

4.7.1.1. Constructing ClassPathXmlApplicationContext instances - shortcuts

The ClassPathXmlApplicationContext exposes a number of constructors to enable convenient instantiation. The basic idea is that one supplies merely a string array containing just the filenames of the XML files themselves (without the leading path information), and one also supplies a Class; the ClassPathXmlApplicationContext will derive the path information from the supplied class.

An example will hopefully make this clear. Consider a directory layout that looks like this:

com/
  foo/
    services.xml
    daos.xml
    MessengerService.class

A ClassPathXmlApplicationContext instance composed of the beans defined in the 'services.xml' and 'daos.xml' could be instantiated like so...

ApplicationContext ctx = new ClassPathXmlApplicationContext(
    new String[] {"services.xml", "daos.xml"}, MessengerService.class);

Please do consult the Javadocs for the ClassPathXmlApplicationContext class for details of the various constructors.

4.7.2. Wildcards in application context constructor resource paths

The resource paths in application context constructor values may be a simple path (as shown above) which has a one-to-one mapping to a target Resource, or alternately may contain the special "classpath*:" prefix and/or internal Ant-style regular expressions (matched using Spring's PathMatcher utility). Both of the latter are effectively wildcards.

One use for this mechanism is when doing component-style application assembly. All components can 'publish' context definition fragments to a well-known location path, and when the final application context is created using the same path prefixed via classpath*:, all component fragments will be picked up automatically.

Note that this wildcarding is specific to use of resource paths in application context constructors (or when using the PathMatcher utility class hierarchy directly), and is resolved at construction time. It has nothing to do with the Resource type itself. It's not possible to use the classpath*: prefix to construct an actual Resource, as a resource points to just one resource at a time.

4.7.2.1. Ant-style Patterns

When the path location contains an Ant-style pattern, for example:

     /WEB-INF/*-context.xml
     com/mycompany/**/applicationContext.xml
     file:C:/some/path/*-context.xml
     classpath:com/mycompany/**/applicationContext.xml

... the resolver follows a more complex but defined procedure to try to resolve the wildcard. It produces a Resource for the path up to the last non-wildcard segment and obtains a URL from it. If this URL is not a "jar:" URL or container-specific variant (e.g. "zip:" in WebLogic, "wsjar" in WebSphere, etc.), then a java.io.File is obtained from it, and used to resolve the wildcard by walking the filesystem. In the case of a jar URL, the resolver either gets a java.net.JarURLConnection from it or manually parses the jar URL, and then traverses the contents of the jar file to resolve the wildcards.

4.7.2.1.1. Implications on portability

If the specified path is already a file URL (either explicitly, or implicitly because the base ResourceLoader is a filesystem one), then wildcarding is guaranteed to work in a completely portable fashion.

If the specified path is a classpath location, then the resolver must obtain the last non-wildcard path segment URL via a Classloader.getResource() call. Since this is just a node of the path (not the file at the end) it is actually undefined (in the ClassLoader Javadocs) exactly what sort of a URL is returned in this case. In practice, it is always a java.io.File representing the directory, where the classpath resource resolves to a filesystem location, or a jar URL of some sort, where the classpath resource resolves to a jar location. Still, there is a portability concern on this operation.

If a jar URL is obtained for the last non-wildcard segment, the resolver must be able to get a java.net.JarURLConnection from it, or manually parse the jar URL, to be able to walk the contents of the jar, and resolve the wildcard. This will work in most environments, but will fail in others, and it is strongly recommended that the wildcard resolution of resources coming from jars be thoroughly tested in your specific environment before you rely on it.

4.7.2.2. The classpath*: prefix

When constructing an XML-based application context, a location string may use the special classpath*: prefix:

ApplicationContext ctx =
    new ClassPathXmlApplicationContext("classpath*:conf/appContext.xml");

This special prefix specifies that all classpath resources that match the given name must be obtained (internally, this essentially happens via a ClassLoader.getResources(...) call), and then merged to form the final application context definition.

[Note]Classpath*: portability

The wildcard classpath relies on the getResources() method of the underlying classloader. As most application servers nowadays supply their own classloader implementation, the behavior might differ especially when dealing with jar files. A simple test to check if classpath* works is to use the classloader to load a file from within a jar on the classpath: getClass().getClassLoader().getResources("<someFileInsideTheJar>"). Try this test with files that have the same name but are placed inside two different locations. In case an inappropriate result is returned, check the application server documentation for settings that might affect the classloader behavior.
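
A minimal sketch of such a check follows; the resource name 'some/file/inside/a/Jar.properties' is purely hypothetical and should be replaced with the path of a file you know lives inside one of your jars.

// iterate over every occurrence of the (hypothetical) resource name on the classpath;
// each hit should be printed, including those located inside jar files
Enumeration urls = getClass().getClassLoader().getResources("some/file/inside/a/Jar.properties");
while (urls.hasMoreElements()) {
    System.out.println(urls.nextElement());
}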

The "classpath*:" prefix can also be combined with a PathMatcher pattern in the rest of the location path, for example "classpath*:META-INF/*-beans.xml". In this case, the resolution strategy is fairly simple: a ClassLoader.getResources() call is used on the last non-wildcard path segment to get all the matching resources in the class loader hierarchy, and then off each resource the same PathMatcher resoltion strategy described above is used for the wildcard subpath.

4.7.2.3. Other notes relating to wildcards

Please note that "classpath*:" when combined with Ant-style patterns will only work reliably with at least one root directory before the pattern starts, unless the actual target files reside in the file system. This means that a pattern like "classpath*:*.xml" will not retrieve files from the root of jar files but rather only from the root of expanded directories. This originates from a limitation in the JDK's ClassLoader.getResources() method which only returns file system locations for a passed-in empty string (indicating potential roots to search).

Ant-style patterns with "classpath:" resources are not guaranteed to find matching resources if the root package to search is available in multiple class path locations. This is because a resource such as

    com/mycompany/package1/service-context.xml

may be in only one location, but when a path such as

    classpath:com/mycompany/**/service-context.xml

is used to try to resolve it, the resolver will work off the (first) URL returned by getResource("com/mycompany"). If this base package node exists in multiple classloader locations, the actual end resource may not be underneath. Therefore, preferably, use "classpath*:" with the same Ant-style pattern in such a case, which will search all class path locations that contain the root package.
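
For example, the following sketch (reusing the hypothetical com/mycompany package from above) searches every class path root that contains that package:

ApplicationContext ctx = new ClassPathXmlApplicationContext(
    "classpath*:com/mycompany/**/service-context.xml");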

4.7.3. FileSystemResource caveats

A FileSystemResource that is not attached to a FileSystemApplicationContext (that is, a FileSystemApplicationContext is not the actual ResourceLoader) will treat absolute vs. relative paths as you would expect. Relative paths are relative to the current working directory, while absolute paths are relative to the root of the filesystem.

For backwards compatibility (historical) reasons however, this changes when the FileSystemApplicationContext is the ResourceLoader. The FileSystemApplicationContext simply forces all attached FileSystemResource instances to treat all location paths as relative, whether they start with a leading slash or not. In practice, this means the following are equivalent:

ApplicationContext ctx =
    new FileSystemXmlApplicationContext("conf/context.xml");
ApplicationContext ctx =
    new FileSystemXmlApplicationContext("/conf/context.xml");

As are the following: (Even though it would make sense for them to be different, as one case is relative and the other absolute.)

FileSystemXmlApplicationContext ctx = ...;
ctx.getResource("some/resource/path/myTemplate.txt");
FileSystemXmlApplicationContext ctx = ...;
ctx.getResource("/some/resource/path/myTemplate.txt");

In practice, if true absolute filesystem paths are needed, it is better to forgo the use of absolute paths with FileSystemResource / FileSystemXmlApplicationContext, and just force the use of a UrlResource, by using the file: URL prefix.

// actual context type doesn't matter, the Resource will always be UrlResource
ctx.getResource("file:/some/resource/path/myTemplate.txt");
// force this FileSystemXmlApplicationContext to load its definition via a UrlResource
ApplicationContext ctx =
    new FileSystemXmlApplicationContext("file:/conf/context.xml");

Chapter 5. Validation, Data-binding, the BeanWrapper, and PropertyEditors

5.1. Introduction

There are pros and cons for considering validation as business logic, and Spring offers a design for validation (and data binding) that does not exclude either one of them. Specifically, validation should not be tied to the web tier, should be easy to localize, and it should be possible to plug in any validator available. Considering the above, Spring has come up with a Validator interface that is both basic and eminently usable in every layer of an application.

Data binding is useful for allowing user input to be dynamically bound to the domain model of an application (or whatever objects you use to process user input). Spring provides the so-called DataBinder to do exactly that. The Validator and the DataBinder make up the validation package, which is primarily used in but not limited to the MVC framework.
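
As a minimal sketch of the DataBinder used in isolation (outside of the MVC framework), consider binding a couple of property values onto the Person class used in the validation example later in this chapter; the property names are assumptions based on that class, and the values would normally come from user input such as request parameters.

Person person = new Person();
DataBinder binder = new DataBinder(person);

// property values would typically originate from user input
MutablePropertyValues pvs = new MutablePropertyValues();
pvs.addPropertyValue("name", "John Doe");
pvs.addPropertyValue("age", "31");  // supplied as a String, converted by a PropertyEditor

binder.bind(pvs);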

The BeanWrapper is a fundamental concept in the Spring Framework and is used in a lot of places. However, you probably will not ever have the need to use the BeanWrapper directly. Because this is reference documentation however, we felt that some explanation might be in order. We're explaining the BeanWrapper in this chapter since if you were going to use it at all, you would probably do so when trying to bind data to objects, which is strongly related to the BeanWrapper.

Spring uses PropertyEditors all over the place. The concept of a PropertyEditor is part of the JavaBeans specification. Just as the BeanWrapper, it's best to explain the use of PropertyEditors in this chapter as well, since it's closely related to the BeanWrapper and the DataBinder.

5.2. Validation using Spring's Validator interface

Spring features a Validator interface that you can use to validate objects. The Validator interface works using an Errors object so that, while validating, validators can report validation failures to the Errors object.

Let's consider a small data object:

public class Person {

  private String name;
  private int age;

  // the usual getters and setters...
}

We're going to provide validation behavior for the Person class by implementing the following two methods of the org.springframework.validation.Validator interface:

  • supports(Class) - Can this Validator validate instances of the supplied Class?

  • validate(Object, org.springframework.validation.Errors) - validates the given object and in case of validation errors, registers those with the given Errors object

Implementing a Validator is fairly straightforward, especially when you know of the ValidationUtils helper class that the Spring Framework also provides.

public class PersonValidator implements Validator {
    
    /**
    * This Validator validates just Person instances
    */
    public boolean supports(Class clazz) {
        return Person.class.equals(clazz);
    }
    
    public void validate(Object obj, Errors e) {
        ValidationUtils.rejectIfEmpty(e, "name", "name.empty");
        Person p = (Person) obj;
        if (p.getAge() < 0) {
            e.rejectValue("age", "negativevalue");
        } else if (p.getAge() > 110) {
            e.rejectValue("age", "too.darn.old");
        }
    }
}

As you can see, the static rejectIfEmpty(..) method on the ValidationUtils class is used to reject the 'name' property if it is null or the empty string. Have a look at the Javadoc for the ValidationUtils class to see what functionality it provides besides the example shown previously.

While it is certainly possible to implement a single Validator class to validate each of the nested objects in a rich object, it may be better to encapsulate the validation logic for each nested class of object in its own Validator implementation. A simple example of a 'rich' object would be a Customer that is composed of two String properties (a first and second name) and a complex Address object. Address objects may be used independently of Customer objects, and so a distinct AddressValidator has been implemented. If you want your CustomerValidator to reuse the logic contained within the AddressValidator class without recourse to copy-n-paste you can dependency-inject or instantiate an AddressValidator within your CustomerValidator, and use it like so:

public class CustomerValidator implements Validator {

   private final Validator addressValidator;

   public CustomerValidator(Validator addressValidator) {
      if (addressValidator == null) {
          throw new IllegalArgumentException("The supplied [Validator] is required and must not be null.");
      }
      if (!addressValidator.supports(Address.class)) {
          throw new IllegalArgumentException(
            "The supplied [Validator] must support the validation of [Address] instances.");
      }
      this.addressValidator = addressValidator;
   }

    /**
    * This Validator validates Customer instances, and any subclasses of Customer too
    */
   public boolean supports(Class clazz) {
      return Customer.class.isAssignableFrom(clazz);
   }

   public void validate(Object target, Errors errors) {
      ValidationUtils.rejectIfEmptyOrWhitespace(errors, "firstName", "field.required");
      ValidationUtils.rejectIfEmptyOrWhitespace(errors, "surname", "field.required");
      Customer customer = (Customer) target;
      try {
          errors.pushNestedPath("address");
          ValidationUtils.invokeValidator(this.addressValidator, customer.getAddress(), errors);
      } finally {
          errors.popNestedPath();
      }
   }
}

Validation errors are reported to the Errors object passed to the validator. In the case of Spring Web MVC you can use the <spring:bind/> tag to inspect the error messages, but of course you can also inspect the Errors object yourself. More information about the methods it offers can be found in the Javadoc.
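
Outside of the web tier, a validator can also be invoked directly. The following sketch uses Spring 2.0's BeanPropertyBindingResult as the Errors implementation; the Person and PersonValidator classes are the ones shown above, and the "person" object name is just an illustrative choice.

Person person = new Person();
person.setAge(-1);  // assumes the usual setter from the Person class above

Errors errors = new BeanPropertyBindingResult(person, "person");
new PersonValidator().validate(person, errors);

if (errors.hasErrors()) {
    // each ObjectError/FieldError carries the registered error codes
    System.out.println(errors.getAllErrors());
}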

5.3. Resolving codes to error messages

We've talked about databinding and validation. Outputting messages corresponding to validation errors is the last thing we need to discuss. In the example we've shown above, we rejected the name and the age field. If we're going to output the error messages by using a MessageSource, we will do so using the error code we've given when rejecting the field ('name' and 'age' in this case). When you call (either directly, or indirectly, using for example the ValidationUtils class) rejectValue or one of the other reject methods from the Errors interface, the underlying implementation will not only register the code you've passed in, but also a number of additional error codes. What error codes it registers is determined by the MessageCodesResolver that is used. By default, the DefaultMessageCodesResolver is used, which for example not only registers a message with the code you gave, but also messages that include the field name you passed to the reject method. So in case you reject a field using rejectValue("age", "too.darn.old"), apart from the too.darn.old code, Spring will also register too.darn.old.age and too.darn.old.age.int (so the first will include the field name and the second will include the type of the field); this is done as a convenience to aid developers in targeting error messages and suchlike.
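
The expansion performed by the DefaultMessageCodesResolver can also be observed directly. The sketch below resolves the codes for the 'age' field rejection from the earlier example; the object name "person" is an assumption made for illustration.

MessageCodesResolver resolver = new DefaultMessageCodesResolver();

// the returned array contains the given code plus codes qualified with the
// object name, the field name and the field type
String[] codes = resolver.resolveMessageCodes("too.darn.old", "person", "age", int.class);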

More information on the MessageCodesResolver and the default strategy can be found online with the Javadocs for MessageCodesResolver and DefaultMessageCodesResolver respectively.

5.4. Bean manipulation and the BeanWrapper

The org.springframework.beans package adheres to the JavaBeans standard provided by Sun. A JavaBean is simply a class with a default no-argument constructor, which follows naming conventions whereby a property named bingoMadness has a setter setBingoMadness(..) and a getter getBingoMadness(). For more information about JavaBeans and the specification, please refer to Sun's website (java.sun.com/products/javabeans).

One quite important concept of the beans package is the BeanWrapper interface and its corresponding implementation (BeanWrapperImpl). As quoted from the Javadoc, the BeanWrapper offers functionality to set and get property values (individually or in bulk), get property descriptors, and to query properties to determine if they are readable or writable. Also, the BeanWrapper offers support for nested properties, enabling the setting of properties on sub-properties to an unlimited depth. Then, the BeanWrapper supports the ability to add standard JavaBeans PropertyChangeListeners and VetoableChangeListeners, without the need for supporting code in the target class. Last but not least, the BeanWrapper provides support for the setting of indexed properties. The BeanWrapper usually isn't used by application code directly, but by the DataBinder and the BeanFactory.

The way the BeanWrapper works is partly indicated by its name: it wraps a bean to perform actions on that bean, like setting and retrieving properties.

5.4.1. Setting and getting basic and nested properties

Setting and getting properties is done using the setPropertyValue(s) and getPropertyValue(s) methods that both come with a couple of overloaded variants. They're all described in more detail in the Javadoc Spring comes with. What's important to know is that there are a couple of conventions for indicating properties of an object. A couple of examples:

Table 5.1. Examples of properties

Expression           | Explanation
name                 | Indicates the property name corresponding to the methods getName() or isName() and setName(..)
account.name         | Indicates the nested property name of the property account corresponding e.g. to the methods getAccount().setName() or getAccount().getName()
account[2]           | Indicates the third element of the indexed property account. Indexed properties can be of type array, list or other naturally ordered collection
account[COMPANYNAME] | Indicates the value of the map entry indexed by the key COMPANYNAME of the Map property account

Below you'll find some examples of working with the BeanWrapper to get and set properties.

(This next section is not vitally important to you if you're not planning to work with the BeanWrapper directly. If you're just using the DataBinder and the BeanFactory and their out-of-the-box implementation, you should skip ahead to the section about PropertyEditors.)

Consider the following two classes:

public class Company {
    private String name;
    private Employee managingDirector;

    public String getName()	{ 
        return this.name; 
    }
    public void setName(String name) { 
        this.name = name; 
    } 
    public Employee getManagingDirector() { 
        return this.managingDirector; 
    }
    public void setManagingDirector(Employee managingDirector) {
        this.managingDirector = managingDirector;
    }
}
public class Employee {
    private String name;
    private float salary;

    public String getName()	{
        return this.name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public float getSalary() {
        return salary;
    }
    public void setSalary(float salary) {
        this.salary = salary;
    }
}

The following code snippets show some examples of how to retrieve and manipulate some of the properties of instantiated Companies and Employees:

BeanWrapper company = new BeanWrapperImpl(new Company());
// setting the company name..
company.setPropertyValue("name", "Some Company Inc.");
// ... can also be done like this:
PropertyValue value = new PropertyValue("name", "Some Company Inc.");
company.setPropertyValue(value);

// ok, let's create the director and tie it to the company:
BeanWrapper jim = new BeanWrapperImpl(new Employee());
jim.setPropertyValue("name", "Jim Stravinsky");
company.setPropertyValue("managingDirector", jim.getWrappedInstance());

// retrieving the salary of the managingDirector through the company
Float salary = (Float) company.getPropertyValue("managingDirector.salary");

5.4.2. Built-in PropertyEditor implementations

Spring heavily uses the concept of PropertyEditors. Sometimes it might be handy to be able to represent properties in a different way than the object itself. For example, a date can be represented in a human readable way, while we're still able to convert the human readable form back to the original date (or even better: convert any date entered in a human readable form, back to Date objects). This behavior can be achieved by registering custom editors, of type java.beans.PropertyEditor. Registering custom editors on a BeanWrapper or alternately in a specific IoC container as mentioned in the previous chapter, gives it the knowledge of how to convert properties to the desired type. Read more about PropertyEditors in the Javadoc of the java.beans package provided by Sun.
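
As a small sketch of registering an editor directly on a BeanWrapper, the built-in CustomDateEditor (described in the table below) can be attached so that a java.util.Date property may be set from a String; the target object and its 'releaseDate' property are hypothetical.

BeanWrapper wrapper = new BeanWrapperImpl(someBean);  // someBean is a hypothetical target object
wrapper.registerCustomEditor(Date.class,
    new CustomDateEditor(new SimpleDateFormat("yyyy-MM-dd"), false));

// the String value is converted to a java.util.Date by the registered editor
wrapper.setPropertyValue("releaseDate", "2007-06-19");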

A couple of examples where property editing is used in Spring:

  • setting properties on beans is done using PropertyEditors. When mentioning java.lang.String as the value of a property of some bean you're declaring in an XML file, Spring will (if the setter of the corresponding property has a Class parameter) use the ClassEditor to try to resolve the parameter to a Class object

  • parsing HTTP request parameters in Spring's MVC framework is done using all kinds of PropertyEditors that you can manually bind in all subclasses of the CommandController

Spring has a number of built-in PropertyEditors to make life easy. Each of those is listed below and they are all located in the org.springframework.beans.propertyeditors package. Most, but not all (as indicated below), are registered by default by BeanWrapperImpl. Where the property editor is configurable in some fashion, you can of course still register your own variant to override the default one:

Table 5.2. Built-in PropertyEditors

Class                   | Explanation
ByteArrayPropertyEditor | Editor for byte arrays. Strings will simply be converted to their corresponding byte representations. Registered by default by BeanWrapperImpl.
ClassEditor             | Parses Strings representing classes to actual classes and the other way around. When a class is not found, an IllegalArgumentException is thrown. Registered by default by BeanWrapperImpl.
CustomBooleanEditor     | Customizable property editor for Boolean properties. Registered by default by BeanWrapperImpl, but can be overridden by registering a custom instance of it as a custom editor.
CustomCollectionEditor  | Property editor for Collections, converting any source Collection to a given target Collection type.
CustomDateEditor        | Customizable property editor for java.util.Date, supporting a custom DateFormat. NOT registered by default. Must be user registered as needed with the appropriate format.
CustomNumberEditor      | Customizable property editor for any Number subclass like Integer, Long, Float, Double. Registered by default by BeanWrapperImpl, but can be overridden by registering a custom instance of it as a custom editor.
FileEditor              | Capable of resolving Strings to java.io.File objects. Registered by default by BeanWrapperImpl.
InputStreamEditor       | One-way property editor, capable of taking a text string and producing (via an intermediate ResourceEditor and Resource) an InputStream, so InputStream properties may be directly set as Strings. Note that the default usage will not close the InputStream for you! Registered by default by BeanWrapperImpl.
LocaleEditor            | Capable of resolving Strings to Locale objects and vice versa (the String format is [language]_[country]_[variant], which is the same thing the toString() method of Locale provides). Registered by default by BeanWrapperImpl.
PatternEditor           | Capable of resolving Strings to JDK 1.5 Pattern objects and vice versa.
PropertiesEditor        | Capable of converting Strings (formatted using the format as defined in the Javadoc for the java.util.Properties class) to Properties objects. Registered by default by BeanWrapperImpl.
StringTrimmerEditor     | Property editor that trims Strings. Optionally allows transforming an empty string into a null value. NOT registered by default; must be user registered as needed.
URLEditor               | Capable of resolving a String representation of a URL to an actual URL object. Registered by default by BeanWrapperImpl.

Spring uses the java.beans.PropertyEditorManager to set the search path for property editors that might be needed. The search path also includes sun.bean.editors, which includes PropertyEditor implementations for types such as Font, Color, and most of the primitive types. Note also that the standard JavaBeans infrastructure will automatically discover PropertyEditor classes (without you having to register them explicitly) if they are in the same package as the class they handle, and have the same name as that class, with 'Editor' appended; for example, one could have the following class and package structure, which would be sufficient for the FooEditor class to be recognized and used as the PropertyEditor for Foo-typed properties.

com
  chank
    pop
      Foo
      FooEditor   // the PropertyEditor for the Foo class
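
A minimal sketch of what such a FooEditor might look like follows; the Foo type and its String-based constructor are assumptions made purely for the purpose of the example.

package com.chank.pop;

import java.beans.PropertyEditorSupport;

// picked up automatically because it lives in the same package as Foo and is named FooEditor
public class FooEditor extends PropertyEditorSupport {

    public void setAsText(String text) {
        setValue(new Foo(text));  // assumes Foo exposes a suitable String-based constructor
    }
}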

Note that you can also use the standard BeanInfo JavaBeans mechanism here as well (described in not-amazing-detail here). Find below an example of using the BeanInfo mechanism for explicitly registering one or more PropertyEditor instances with the properties of an associated class.

com
  chank
    pop
      Foo
      FooBeanInfo   // the BeanInfo for the Foo class

Here is the Java source code for the referenced FooBeanInfo class. This would associate a CustomNumberEditor with the age property of the Foo class.

import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;
import java.beans.PropertyEditor;
import java.beans.SimpleBeanInfo;

import org.springframework.beans.propertyeditors.CustomNumberEditor;

public class FooBeanInfo extends SimpleBeanInfo {
      
    public PropertyDescriptor[] getPropertyDescriptors() {
        try {
            final PropertyEditor numberPE = new CustomNumberEditor(Integer.class, true);
            PropertyDescriptor ageDescriptor = new PropertyDescriptor("age", Foo.class) {
                public PropertyEditor createPropertyEditor(Object bean) {
                    return numberPE;
                };
            };
            return new PropertyDescriptor[] { ageDescriptor };
        }
        catch (IntrospectionException ex) {
            throw new Error(ex.toString());
        }
    }
}

5.4.2.1. Registering additional custom PropertyEditors

When setting bean properties as a string value, a Spring IoC container ultimately uses standard JavaBeans PropertyEditors to convert these Strings to the complex type of the property. Spring pre-registers a number of custom PropertyEditors (for example, to convert a classname expressed as a string into a real Class object). Additionally, Java's standard JavaBeans PropertyEditor lookup mechanism allows a PropertyEditor for a class simply to be named appropriately and placed in the same package as the class it provides support for, to be found automatically.

If there is a need to register other custom PropertyEditors, there are several mechanisms available. The most manual approach, which is not normally convenient or recommended, is to simply use the registerCustomEditor() method of the ConfigurableBeanFactory interface, assuming you have a BeanFactory reference. The more convenient mechanism is to use a special bean factory post-processor called CustomEditorConfigurer. Although bean factory post-processors can be used semi-manually with BeanFactory implementations, this one has a nested property setup, so it is strongly recommended that it is used with the ApplicationContext, where it may be deployed in similar fashion to any other bean, and automatically detected and applied.
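
For completeness, the manual approach looks roughly like the following sketch (assuming you already hold a ConfigurableBeanFactory reference, and using the ExoticType and ExoticTypeEditor classes shown later in this section); the CustomEditorConfigurer route shown below is normally preferable.

ConfigurableBeanFactory beanFactory = ...;  // obtained elsewhere; shown only as a sketch
beanFactory.registerCustomEditor(ExoticType.class, new ExoticTypeEditor());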

Note that all bean factories and application contexts automatically use a number of built-in property editors, through their use of something called a BeanWrapper to handle property conversions. The standard property editors that the BeanWrapper registers are listed in the previous section. Additionally, ApplicationContexts also override or add an additional number of editors to handle resource lookups in a manner appropriate to the specific application context type.

Standard JavaBeans PropertyEditor instances are used to convert property values expressed as strings to the actual complex type of the property. CustomEditorConfigurer, a bean factory post-processor, may be used to conveniently add support for additional PropertyEditor instances to an ApplicationContext.

Consider a user class ExoticType, and another class DependsOnExoticType which needs ExoticType set as a property:

package example;
		
public class ExoticType {

    private String name;

    public ExoticType(String name) {
        this.name = name;
    }
}

public class DependsOnExoticType { 
   
    private ExoticType type;

    public void setType(ExoticType type) {
        this.type = type;
    }
}

When things are properly set up, we want to be able to assign the type property as a string, which a PropertyEditor will behind the scenes convert into a real ExoticType object:

<bean id="sample" class="example.DependsOnExoticType">
    <property name="type" value="aNameForExoticType"/>
</bean>

The PropertyEditor implementation could look similar to this:

// converts string representation to ExoticType object
package example;

import java.beans.PropertyEditorSupport;

public class ExoticTypeEditor extends PropertyEditorSupport {

    private String format;

    public void setFormat(String format) {
        this.format = format;
    }
    
    public void setAsText(String text) {
        if (format != null && format.equals("upperCase")) {
            text = text.toUpperCase();
        }
        ExoticType type = new ExoticType(text);
        setValue(type);
    }
}

Finally, we use CustomEditorConfigurer to register the new PropertyEditor with the ApplicationContext, which will then be able to use it as needed:

<bean id="customEditorConfigurer" 
    class="org.springframework.beans.factory.config.CustomEditorConfigurer">
  <property name="customEditors">
    <map>
      <entry key="example.ExoticType">
        <bean class="example.ExoticTypeEditor">
          <property name="format" value="upperCase"/>
        </bean>
      </entry>
    </map>
  </property>
</bean>

Chapter 6. Aspect Oriented Programming with Spring

6.1. Introduction

Aspect-Oriented Programming (AOP) complements Object-Oriented Programming (OOP) by providing another way of thinking about program structure. In addition to classes, AOP gives you aspects. Aspects enable modularization of concerns such as transaction management that cut across multiple types and objects. (Such concerns are often termed crosscutting concerns.)

One of the key components of Spring is the AOP framework. While the Spring IoC container does not depend on AOP, meaning you don't need to use AOP if you don't want to, AOP complements Spring IoC to provide a very capable middleware solution.

AOP is used in the Spring Framework:

  • To provide declarative enterprise services, especially as a replacement for EJB declarative services. The most important such service is declarative transaction management, which builds on the Spring Framework's transaction abstraction.

  • To allow users to implement custom aspects, complementing their use of OOP with AOP.

If you are interested only in generic declarative services or other pre-packaged declarative middleware services such as pooling, you don't need to work directly with Spring AOP, and can skip most of this chapter.

6.1.1. AOP concepts

Let us begin by defining some central AOP concepts. These terms are not Spring-specific. Unfortunately, AOP terminology is not particularly intuitive; however, it would be even more confusing if Spring used its own terminology.

  • Aspect: A modularization of a concern that cuts across multiple objects. Transaction management is a good example of a crosscutting concern in J2EE applications. In Spring AOP, aspects are implemented using regular classes (the schema-based approach) or regular classes annotated with the @Aspect annotation (@AspectJ style).

  • Join point: A point during the execution of a program, such as the execution of a method or the handling of an exception. In Spring AOP, a join point always represents a method execution. Join point information is available in advice bodies by declaring a parameter of type org.aspectj.lang.JoinPoint.

  • Advice: Action taken by an aspect at a particular join point. Different types of advice include "around," "before" and "after" advice. Advice types are discussed below. Many AOP frameworks, including Spring, model an advice as an interceptor, maintaining a chain of interceptors "around" the join point.

  • Pointcut: A predicate that matches join points. Advice is associated with a pointcut expression and runs at any join point matched by the pointcut (for example, the execution of a method with a certain name). The concept of join points as matched by pointcut expressions is central to AOP: Spring uses the AspectJ pointcut language by default.

  • Introduction: (Also known as an inter-type declaration). Declaring additional methods or fields on behalf of a type. Spring AOP allows you to introduce new interfaces (and a corresponding implementation) to any proxied object. For example, you could use an introduction to make a bean implement an IsModified interface, to simplify caching.

  • Target object: Object being advised by one or more aspects. Also referred to as the advised object. Since Spring AOP is implemented using runtime proxies, this object will always be a proxied object.

  • AOP proxy: An object created by the AOP framework in order to implement the aspect contracts (advise method executions and so on). In the Spring Framework, an AOP proxy will be a JDK dynamic proxy or a CGLIB proxy. Proxy creation is transparent to users of the schema-based and @AspectJ styles of aspect declaration introduced in Spring 2.0.

  • Weaving: Linking aspects with other application types or objects to create an advised object. This can be done at compile time (using the AspectJ compiler, for example), load time, or at runtime. Spring AOP, like other pure Java AOP frameworks, performs weaving at runtime.

Types of advice:

  • Before advice: Advice that executes before a join point, but which does not have the ability to prevent execution flow proceeding to the join point (unless it throws an exception).

  • After returning advice: Advice to be executed after a join point completes normally: for example, if a method returns without throwing an exception.

  • After throwing advice: Advice to be executed if a method exits by throwing an exception.

  • After (finally) advice: Advice to be executed regardless of the means by which a join point exits (normal or exceptional return).

  • Around advice: Advice that surrounds a join point such as a method invocation. This is the most powerful kind of advice. Around advice can perform custom behavior before and after the method invocation. It is also responsible for choosing whether to proceed to the join point or to shortcut the advised method execution by returning its own return value or throwing an exception.

Around advice is the most general kind of advice. Since Spring AOP, like AspectJ, provides a full range of advice types, we recommend that you use the least powerful advice type that can implement the required behavior. For example, if you need only to update a cache with the return value of a method, you are better off implementing an after returning advice than an around advice, although an around advice can accomplish the same thing. Using the most specific advice type provides a simpler programming model with less potential for errors. For example, you do not need to invoke the proceed() method on the JoinPoint used for around advice, and hence cannot fail to invoke it.

In Spring 2.0, all advice parameters are statically typed, so that you work with advice parameters of the appropriate type (the type of the return value from a method execution for example) rather than Object arrays.

The concept of join points, matched by pointcuts, is the key to AOP which distinguishes it from older technologies offering only interception. Pointcuts enable advice to be targeted independently of the Object-Oriented hierarchy. For example, an around advice providing declarative transaction management can be applied to a set of methods spanning multiple objects (such as all business operations in the service layer).

6.1.2. Spring AOP capabilities and goals

Spring AOP is implemented in pure Java. There is no need for a special compilation process. Spring AOP does not need to control the class loader hierarchy, and is thus suitable for use in a J2EE web container or application server.

Spring AOP currently supports only method execution join points (advising the execution of methods on Spring beans). Field interception is not implemented, although support for field interception could be added without breaking the core Spring AOP APIs. If you need to advise field access and update join points, consider a language such as AspectJ.

Spring AOP's approach to AOP differs from that of most other AOP frameworks. The aim is not to provide the most complete AOP implementation (although Spring AOP is quite capable); it is rather to provide a close integration between AOP implementation and Spring IoC to help solve common problems in enterprise applications.

Thus, for example, the Spring Framework's AOP functionality is normally used in conjunction with the Spring IoC container. Aspects are configured using normal bean definition syntax (although this allows powerful "autoproxying" capabilities): this is a crucial difference from other AOP implementations. There are some things you cannot do easily or efficiently with Spring AOP, such as advise very fine-grained objects: AspectJ is the best choice in such cases. However, our experience is that Spring AOP provides an excellent solution to most problems in J2EE applications that are amenable to AOP.

Spring AOP will never strive to compete with AspectJ to provide a comprehensive AOP solution. We believe that both proxy-based frameworks like Spring AOP and full-blown frameworks such as AspectJ are valuable, and that they are complementary, rather than in competition. Spring 2.0 seamlessly integrates Spring AOP and IoC with AspectJ, to enable all uses of AOP to be catered for within a consistent Spring-based application architecture. This integration does not affect the Spring AOP API or the AOP Alliance API: Spring AOP remains backward-compatible. See the following chapter for a discussion of the Spring AOP APIs.

[Note]Note

One of the central tenets of the Spring Framework is that of non-invasiveness; this is the idea that you should not be forced to introduce framework-specific classes and interfaces into your business/domain model. However, in some places the Spring Framework does give you the option to introduce Spring Framework-specific dependencies into your codebase: the rationale in giving you such options is because in certain scenarios it might be just plain easier to read or code some specific piece of functionality in such a way. The Spring Framework (almost) always offers you the choice though: you have the freedom to make an informed decision as to which option best suits your particular use case or scenario.

One such choice that is relevant to this chapter is that of which AOP framework (and which AOP style) to choose. You have the choice of AspectJ and/or Spring AOP, and you also have the choice of either the @AspectJ annotation-style approach or the Spring XML configuration-style approach. The fact that this chapter chooses to introduce the @AspectJ-style approach first should not be taken as an indication that the Spring team favors the @AspectJ annotation-style approach over the Spring XML configuration-style.

See the section entitled Section 6.4, “Choosing which AOP declaration style to use” for a fuller discussion of the whys and wherefores of each style.

6.1.3. AOP Proxies

Spring AOP defaults to using standard J2SE dynamic proxies for AOP proxies. This enables any interface (or set of interfaces) to be proxied.

Spring AOP can also use CGLIB proxies. This is necessary to proxy classes, rather than interfaces. CGLIB is used by default if a business object does not implement an interface. As it is good practice to program to interfaces rather than classes, business classes normally will implement one or more business interfaces. It is possible to force the use of CGLIB, in those (hopefully rare) cases where you need to advise a method that is not declared on an interface, or where you need to pass a proxied object to a method as a concrete type.

It is important to grasp the fact that Spring AOP is proxy-based. See the section entitled Section 6.6.1, “Understanding AOP proxies” for a thorough examination of exactly what this implementation detail actually means.

6.2. @AspectJ support

@AspectJ refers to a style of declaring aspects as regular Java classes annotated with Java 5 annotations. The @AspectJ style was introduced by the AspectJ project as part of the AspectJ 5 release. Spring 2.0 interprets the same annotations as AspectJ 5, using a library supplied by AspectJ for pointcut parsing and matching. The AOP runtime is still pure Spring AOP though, and there is no dependency on the AspectJ compiler or weaver.

Using the AspectJ compiler and weaver enables use of the full AspectJ language, and is discussed in Section 6.8, “Using AspectJ with Spring applications”.

6.2.1. Enabling @AspectJ Support

To use @AspectJ aspects in a Spring configuration you need to enable Spring support for configuring Spring AOP based on @AspectJ aspects, and autoproxying beans based on whether or not they are advised by those aspects. By autoproxying we mean that if Spring determines that a bean is advised by one or more aspects, it will automatically generate a proxy for that bean to intercept method invocations and ensure that advice is executed as needed.

The @AspectJ support is enabled by including the following element inside your spring configuration:

<aop:aspectj-autoproxy/>

This assumes that you are using schema support as described in Appendix A, XML Schema-based configuration. See Section A.2.6, “The aop schema” for how to import the tags in the aop namespace.

If you are using the DTD, it is still possible to enable @AspectJ support by adding the following definition to your application context:

<bean class="org.springframework.aop.aspectj.annotation.AnnotationAwareAspectJAutoProxyCreator" />

You will also need two AspectJ libraries on the classpath of your application: aspectjweaver.jar and aspectjrt.jar. These libraries are available in the 'lib' directory of an AspectJ installation (version 1.5.1 or later required), or in the 'lib/aspectj' directory of the Spring-with-dependencies distribution.

6.2.2. Declaring an aspect

With the @AspectJ support enabled, any bean defined in your application context with a class that is an @AspectJ aspect (has the @Aspect annotation) will be automatically detected by Spring and used to configure Spring AOP. The following example shows the minimal definition required for a not-very-useful aspect:

A regular bean definition in the application context, pointing to a bean class that has the @Aspect annotation:

<bean id="myAspect" class="org.xyz.NotVeryUsefulAspect">
   <!-- configure properties of aspect here as normal -->
</bean>

And the NotVeryUsefulAspect class definition, annotated with org.aspectj.lang.annotation.Aspect annotation;

package org.xyz;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class NotVeryUsefulAspect {

}

Aspects (classes annotated with @Aspect) may have methods and fields just like any other class. They may also contain pointcut, advice, and introduction (inter-type) declarations.

[Note]Advising aspects

In Spring AOP, it is not possible to have aspects themselves be the target of advice from other aspects. The @Aspect annotation on a class marks it as an aspect, and hence excludes it from auto-proxying.

6.2.3. Declaring a pointcut

Recall that pointcuts determine join points of interest, and thus enable us to control when advice executes. Spring AOP only supports method execution join points for Spring beans, so you can think of a pointcut as matching the execution of methods on Spring beans. A pointcut declaration has two parts: a signature comprising a name and any parameters, and a pointcut expression that determines exactly which method executions we are interested in. In the @AspectJ annotation-style of AOP, a pointcut signature is provided by a regular method definition, and the pointcut expression is indicated using the @Pointcut annotation (the method serving as the pointcut signature must have a void return type).

An example will help make this distinction between a pointcut signature and a pointcut expression clear. The following example defines a pointcut named 'anyOldTransfer' that will match the execution of any method named 'transfer':

@Pointcut("execution(* transfer(..))")// the pointcut expression
private void anyOldTransfer() {}// the pointcut signature

The pointcut expression that forms the value of the @Pointcut annotation is a regular AspectJ 5 pointcut expression. For a full discussion of AspectJ's pointcut language, see the AspectJ Programming Guide (and for Java 5 based extensions, the AspectJ 5 Developers Notebook) or one of the books on AspectJ such as “Eclipse AspectJ” by Colyer et. al. or “AspectJ in Action” by Ramnivas Laddad.

6.2.3.1. Supported Pointcut Designators

Spring AOP supports the following AspectJ pointcut designators for use in pointcut expressions:

  • execution - for matching method execution join points, this is the primary pointcut designator you will use when working with Spring AOP

  • within - limits matching to join points within certain types (simply the execution of a method declared within a matching type when using Spring AOP)

  • this - limits matching to join points (the execution of methods when using Spring AOP) where the bean reference (Spring AOP proxy) is an instance of the given type

  • target - limits matching to join points (the execution of methods when using Spring AOP) where the target object (application object being proxied) is an instance of the given type

  • args - limits matching to join points (the execution of methods when using Spring AOP) where the arguments are instances of the given types

  • @target - limits matching to join points (the execution of methods when using Spring AOP) where the class of the executing object has an annotation of the given type

  • @args - limits matching to join points (the execution of methods when using Spring AOP) where the runtime type of the actual arguments passed have annotations of the given type(s)

  • @within - limits matching to join points within types that have the given annotation (the execution of methods declared in types with the given annotation when using Spring AOP)

  • @annotation - limits matching to join points where the subject of the join point (method being executed in Spring AOP) has the given annotation

Because Spring AOP limits matching to only method execution join points, the discussion of the pointcut designators above gives a narrower definition than you will find in the AspectJ programming guide. In addition, AspectJ itself has type-based semantics and at an execution join point both 'this' and 'target' refer to the same object - the object executing the method. Spring AOP is a proxy based system and differentiates between the proxy object itself (bound to 'this') and the target object behind the proxy (bound to 'target').

6.2.3.2. Combining pointcut expressions

Pointcut expressions can be combined using '&&', '||' and '!'. It is also possible to refer to pointcut expressions by name. The following example shows three pointcut expressions: anyPublicOperation (which matches if a method execution join point represents the execution of any public method); inTrading (which matches if a method execution is in the trading module), and tradingOperation (which matches if a method execution represents any public method in the trading module).

    @Pointcut("execution(public * *(..))")
    private void anyPublicOperation() {}
    
    @Pointcut("within(com.xyz.someapp.trading..*")
    private void inTrading() {}
    
    @Pointcut("anyPublicOperation() && inTrading()")
    private void tradingOperation() {}

It is a best practice to build more complex pointcut expressions out of smaller named components as shown above. When referring to pointcuts by name, normal Java visibility rules apply (you can see private pointcuts in the same type, protected pointcuts in the hierarchy, public pointcuts anywhere and so on). Visibility does not affect pointcut matching.

6.2.3.3. Sharing common pointcut definitions

When working with enterprise applications, you often want to refer to modules of the application and particular sets of operations from within several aspects. We recommend defining a "SystemArchitecture" aspect that captures common pointcut expressions for this purpose. A typical such aspect would look as follows:

package com.xyz.someapp;

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class SystemArchitecture {

  /**
   * A join point is in the web layer if the method is defined
   * in a type in the com.xyz.someapp.web package or any sub-package
   * under that.
   */
  @Pointcut("within(com.xyz.someapp.web..*)")
  public void inWebLayer() {}

  /**
   * A join point is in the service layer if the method is defined
   * in a type in the com.xyz.someapp.service package or any sub-package
   * under that.
   */
  @Pointcut("within(com.xyz.someapp.service..*)")
  public void inServiceLayer() {}

  /**
   * A join point is in the data access layer if the method is defined
   * in a type in the com.xyz.someapp.dao package or any sub-package
   * under that.
   */
  @Pointcut("within(com.xyz.someapp.dao..*)")
  public void inDataAccessLayer() {}

  /**
   * A business service is the execution of any method defined on a service
   * interface. This definition assumes that interfaces are placed in the
   * "service" package, and that implementation types are in sub-packages.
   * 
   * If you group service interfaces by functional area (for example, 
   * in packages com.xyz.someapp.abc.service and com.xyz.def.service) then
   * the pointcut expression "execution(* com.xyz.someapp..service.*.*(..))"
   * could be used instead.
   */
  @Pointcut("execution(* com.xyz.someapp.service.*.*(..))")
  public void businessService() {}
  
  /**
   * A data access operation is the execution of any method defined on a 
   * dao interface. This definition assumes that interfaces are placed in the
   * "dao" package, and that implementation types are in sub-packages.
   */
  @Pointcut("execution(* com.xyz.someapp.dao.*.*(..))")
  public void dataAccessOperation() {}

}

The pointcuts defined in such an aspect can be referred to anywhere that you need a pointcut expression. For example, to make the service layer transactional, you could write:

<aop:config>
  <aop:advisor 
      pointcut="com.xyz.someapp.SystemArchitecture.businessService()"
      advice-ref="tx-advice"/>
</aop:config>

<tx:advice id="tx-advice">
<tx:attributes>
    <tx:method name="*" propagation="REQUIRED"/>
  </tx:attributes>
</tx:advice>

The <aop:config> and <aop:advisor> tags are discussed in the section entitled Section 6.3, “Schema-based AOP support”. The transaction tags are discussed in the chapter entitled Chapter 9, Transaction management.

6.2.3.4. Examples

Spring AOP users are likely to use the execution pointcut designator the most often. The format of an execution expression is:

execution(modifiers-pattern? ret-type-pattern declaring-type-pattern? name-pattern(param-pattern)
          throws-pattern?)

All parts except the returning type pattern (ret-type-pattern in the snippet above), name pattern, and parameters pattern are optional. The returning type pattern determines what the return type of the method must be in order for a join point to be matched. Most frequently you will use * as the returning type pattern, which matches any return type. A fully-qualified type name will match only when the method returns the given type. The name pattern matches the method name. You can use the * wildcard as all or part of a name pattern. The parameters pattern is slightly more complex: () matches a method that takes no parameters, whereas (..) matches any number of parameters (zero or more). The pattern (*) matches a method taking one parameter of any type, while (*,String) matches a method taking two parameters, the first of which can be of any type and the second of which must be a String. Consult the Language Semantics section of the AspectJ Programming Guide for more information.

Some examples of common pointcut expressions are given below.

  • the execution of any public method:

    execution(public * *(..))
  • the execution of any method with a name beginning with "set":

    execution(* set*(..))
  • the execution of any method defined by the AccountService interface:

    execution(* com.xyz.service.AccountService.*(..))
  • the execution of any method defined in the service package:

    execution(* com.xyz.service.*.*(..))
  • the execution of any method defined in the service package or a sub-package:

    execution(* com.xyz.service..*.*(..))
  • any join point (method execution only in Spring AOP) within the service package:

    within(com.xyz.service.*)
  • any join point (method execution only in Spring AOP) within the service package or a sub-package:

    within(com.xyz.service..*)
  • any join point (method execution only in Spring AOP) where the proxy implements the AccountService interface:

    this(com.xyz.service.AccountService)

    'this' is more commonly used in a binding form :- see the following section on advice for how to make the proxy object available in the advice body.

  • any join point (method execution only in Spring AOP) where the target object implements the AccountService interface:

    target(com.xyz.service.AccountService)

    'target' is more commonly used in a binding form :- see the following section on advice for how to make the target object available in the advice body.

  • any join point (method execution only in Spring AOP) which takes a single parameter, and where the argument passed at runtime is Serializable:

    args(java.io.Serializable)

    'args' is more commonly used in a binding form :- see the following section on advice for how to make the method arguments available in the advice body.

    Note that the pointcut given in this example is different to execution(* *(java.io.Serializable)): the args version matches if the argument passed at runtime is Serializable, the execution version matches if the method signature declares a single parameter of type Serializable.

  • any join point (method execution only in Spring AOP) where the target object has an @Transactional annotation:

    @target(org.springframework.transaction.annotation.Transactional)

    '@target' can also be used in a binding form :- see the following section on advice for how to make the annotation object available in the advice body.

  • any join point (method execution only in Spring AOP) where the declared type of the target object has an @Transactional annotation:

    @within(org.springframework.transaction.annotation.Transactional)

    '@within' can also be used in a binding form :- see the following section on advice for how to make the annotation object available in the advice body.

  • any join point (method execution only in Spring AOP) where the executing method has an @Transactional annotation:

    @annotation(org.springframework.transaction.annotation.Transactional)

    '@annotation' can also be used in a binding form :- see the following section on advice for how to make the annotation object available in the advice body.

  • any join point (method execution only in Spring AOP) which takes a single parameter, and where the runtime type of the argument passed has the @Classified annotation:

    @args(com.xyz.security.Classified)

    '@args' can also be used in a binding form :- see the following section on advice for how to make the annotation object(s) available in the advice body.

6.2.4. Declaring advice

Advice is associated with a pointcut expression, and runs before, after, or around method executions matched by the pointcut. The pointcut expression may be either a simple reference to a named pointcut, or a pointcut expression declared in place.

6.2.4.1. Before advice

Before advice is declared in an aspect using the @Before annotation:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class BeforeExample {

  @Before("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
  public void doAccessCheck() {
    // ...
  }

}

If using an in-place pointcut expression we could rewrite the above example as:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class BeforeExample {

  @Before("execution(* com.xyz.myapp.dao.*.*(..))")
  public void doAccessCheck() {
    // ...
  }

}

6.2.4.2. After returning advice

After returning advice runs when a matched method execution returns normally. It is declared using the @AfterReturning annotation:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterReturning;

@Aspect
public class AfterReturningExample {

  @AfterReturning("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
  public void doAccessCheck() {
    // ...
  }

}

Note: it is of course possible to have multiple advice declarations, and other members as well, all inside the same aspect. We're just showing a single advice declaration in these examples to focus on the issue under discussion at the time.

Sometimes you need access in the advice body to the actual value that was returned. You can use the form of @AfterReturning that binds the return value for this:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterReturning;

@Aspect
public class AfterReturningExample {

  @AfterReturning(
    pointcut="com.xyz.myapp.SystemArchitecture.dataAccessOperation()",
    returning="retVal")
  public void doAccessCheck(Object retVal) {
    // ...
  }
  
}

The name used in the returning attribute must correspond to the name of a parameter in the advice method. When a method execution returns, the return value will be passed to the advice method as the corresponding argument value. A returning clause also restricts matching to only those method executions that return a value of the specified type (Object in this case, which will match any return value).

Please note that it is not possible to return a totally different reference when using after-returning advice.

6.2.4.3. After throwing advice

After throwing advice runs when a matched method execution exits by throwing an exception. It is declared using the @AfterThrowing annotation:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterThrowing;

@Aspect
public class AfterThrowingExample {

  @AfterThrowing("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
  public void doRecoveryActions() {
    // ...
  }

}

Often you want the advice to run only when exceptions of a given type are thrown, and you also often need access to the thrown exception in the advice body. Use the throwing attribute to both restrict matching (if desired, use Throwable as the exception type otherwise) and bind the thrown exception to an advice parameter.

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterThrowing;

@Aspect
public class AfterThrowingExample {

  @AfterThrowing(
    pointcut="com.xyz.myapp.SystemArchitecture.dataAccessOperation()",
    throwing="ex")
  public void doRecoveryActions(DataAccessException ex) {
    // ...
  }

}

The name used in the throwing attribute must correspond to the name of a parameter in the advice method. When a method execution exits by throwing an exception, the exception will be passed to the advice method as the corresponding argument value. A throwing clause also restricts matching to only those method executions that throw an exception of the specified type (DataAccessException in this case).

6.2.4.4. After (finally) advice

After (finally) advice runs however a matched method execution exits. It is declared using the @After annotation. After advice must be prepared to handle both normal and exception return conditions. It is typically used for releasing resources, etc.

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.After;

@Aspect
public class AfterFinallyExample {

  @After("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
  public void doReleaseLock() {
    // ...
  }

}

6.2.4.5. Around advice

The final kind of advice is around advice. Around advice runs "around" a matched method execution. It has the opportunity to do work both before and after the method executes, and to determine when, how, and even if, the method actually gets to execute at all. Around advice is often used if you need to share state before and after a method execution in a thread-safe manner (starting and stopping a timer for example). Always use the least powerful form of advice that meets your requirements (i.e. don't use around advice if simple before advice would do).

Around advice is declared using the @Around annotation. The first parameter of the advice method must be of type ProceedingJoinPoint. Within the body of the advice, calling proceed() on the ProceedingJoinPoint causes the underlying method to execute. The proceed method may also be called passing in an Object[] - the values in the array will be used as the arguments to the method execution when it proceeds.

The behavior of proceed when called with an Object[] is a little different than the behavior of proceed for around advice compiled by the AspectJ compiler. For around advice written using the traditional AspectJ language, the number of arguments passed to proceed must match the number of arguments passed to the around advice (not the number of arguments taken by the underlying join point), and the value passed to proceed in a given argument position supplants the original value at the join point for the entity the value was bound to. (Don't worry if this doesn't make sense right now!) The approach taken by Spring is simpler and a better match to its proxy-based, execution-only semantics. You only need to be aware of this difference if you are compiling @AspectJ aspects written for Spring and using proceed with arguments with the AspectJ compiler and weaver. There is a way to write such aspects that is 100% compatible across both Spring AOP and AspectJ, and this is discussed in the following section on advice parameters.

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.ProceedingJoinPoint;

@Aspect
public class AroundExample {

  @Around("com.xyz.myapp.SystemArchitecture.businessService()")
  public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
    // start stopwatch
    Object retVal = pjp.proceed();
    // stop stopwatch
    return retVal;
  }

}

The value returned by the around advice will be the return value seen by the caller of the method. A simple caching aspect for example could return a value from a cache if it has one, and invoke proceed() if it does not. Note that proceed may be invoked once, many times, or not at all within the body of the around advice, all of these are quite legal.
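
The following is a minimal sketch of the caching idea just described; the in-memory map, the key computation, and the aspect name are illustrative assumptions rather than part of Spring, and no attention is paid to thread-safety or eviction:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class SimpleCachingAspect {

  // cache keyed by the argument values of the matched method (illustration only)
  private final Map<List<Object>, Object> cache = new HashMap<List<Object>, Object>();

  @Around("com.xyz.myapp.SystemArchitecture.businessService()")
  public Object cacheResult(ProceedingJoinPoint pjp) throws Throwable {
    List<Object> key = Arrays.asList(pjp.getArgs());
    if (cache.containsKey(key)) {
      // a cached value exists, so the underlying method is never executed
      return cache.get(key);
    }
    Object retVal = pjp.proceed();
    cache.put(key, retVal);
    return retVal;
  }

}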

6.2.4.6. Advice parameters

Spring 2.0 offers fully typed advice - meaning that you declare the parameters you need in the advice signature (as we saw for the returning and throwing examples above) rather than work with Object[] arrays all the time. We'll see how to make argument and other contextual values available to the advice body in a moment. First let's take a look at how to write generic advice that can find out about the method the advice is currently advising.

6.2.4.6.1. Access to the current JoinPoint

Any advice method may declare, as its first parameter, a parameter of type org.aspectj.lang.JoinPoint (please note that around advice is required to declare a first parameter of type ProceedingJoinPoint, which is a subclass of JoinPoint). The JoinPoint interface provides a number of useful methods such as getArgs() (returns the method arguments), getThis() (returns the proxy object), getTarget() (returns the target object), getSignature() (returns a description of the method that is being advised) and toString() (prints a useful description of the method being advised). Please do consult the Javadocs for full details.
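
As a simple sketch (not an example from the Spring distribution), the following before advice uses the JoinPoint to print a description of the advised method and its arguments:

import java.util.Arrays;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class JoinPointLoggingExample {

  @Before("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
  public void logEntry(JoinPoint jp) {
    // getSignature() describes the advised method, getArgs() returns its argument values
    System.out.println("Entering " + jp.getSignature()
        + " with arguments " + Arrays.toString(jp.getArgs()));
  }

}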

6.2.4.6.2. Passing parameters to advice

We've already seen how to bind the returned value or exception value (using after returning and after throwing advice). To make argument values available to the advice body, you can use the binding form of args. If a parameter name is used in place of a type name in an args expression, then the value of the corresponding argument will be passed as the parameter value when the advice is invoked. An example should make this clearer. Suppose you want to advise the execution of dao operations that take an Account object as the first parameter, and you need access to the account in the advice body. You could write the following:

@Before("com.xyz.myapp.SystemArchitecture.dataAccessOperation() &&" + 
        "args(account,..)")
public void validateAccount(Account account) {
  // ...
}

The args(account,..) part of the pointcut expression serves two purposes: firstly, it restricts matching to only those method executions where the method takes at least one parameter, and the argument passed to that parameter is an instance of Account; secondly, it makes the actual Account object available to the advice via the account parameter.

Another way of writing this is to declare a pointcut that "provides" the Account object value when it matches a join point, and then just refer to the named pointcut from the advice. This would look as follows:

@Pointcut("com.xyz.myapp.SystemArchitecture.dataAccessOperation() &&" + 
          "args(account,..)")
private void accountDataAccessOperation(Account account) {}

@Before("accountDataAccessOperation(account)")
public void validateAccount(Account account) {
  // ...
}

The interested reader is once more referred to the AspectJ programming guide for more details.

The proxy object (this), target object (target), and annotations (@within, @target, @annotation, @args) can all be bound in a similar fashion. The following example shows how you could match the execution of methods annotated with an @Auditable annotation, and extract the audit code.

First the definition of the @Auditable annotation:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Auditable {
	AuditCode value();
}

And then the advice that matches the execution of @Auditable methods:

@Before("com.xyz.lib.Pointcuts.anyPublicMethod() && " + 
        "@annotation(auditable)")
public void audit(Auditable auditable) {
  AuditCode code = auditable.value();
  // ...
}
6.2.4.6.3. Determining argument names

The parameter binding in advice invocations relies on matching names used in pointcut expressions to declared parameter names in (advice and pointcut) method signatures. Parameter names are not available through Java reflection, so Spring AOP uses the following strategies to determine parameter names:

  1. If the parameter names have been specified by the user explicitly, then the specified parameter names are used: both the advice and the pointcut annotations have an optional "argNames" attribute which can be used to specify the argument names of the annotated method - these argument names are available at runtime. For example:

    @Before(
       value="com.xyz.lib.Pointcuts.anyPublicMethod() && @annotation(auditable)",
       argNames="auditable")
    public void audit(Auditable auditable) {
      AuditCode code = auditable.value();
      // ...
    }

    If an @AspectJ aspect has been compiled by the AspectJ compiler (ajc) then there is no need to add the argNames attribute as the compiler will do this automatically.

  2. Using the 'argNames' attribute is a little clumsy, so if the 'argNames' attribute has not been specified, then Spring AOP will look at the debug information for the class and try to determine the parameter names from the local variable table. This information will be present as long as the classes have been compiled with debug information ('-g:vars' at a minimum). The consequences of compiling with this flag on are: (1) your code will be slightly easier to understand (reverse engineer), (2) the class file sizes will be very slightly bigger (typically inconsequential), (3) the optimization to remove unused local variables will not be applied by your compiler. In other words, you should encounter no difficulties building with this flag on.

  3. If the code has been compiled without the necessary debug information, then Spring AOP will attempt to deduce the pairing of binding variables to parameters (for example, if only one variable is bound in the pointcut expression, and the advice method only takes one parameter, the pairing is obvious!). If the binding of variables is ambiguous given the available information, then an AmbiguousBindingException will be thrown.

  4. If all of the above strategies fail then an IllegalArgumentException will be thrown.

6.2.4.6.4. Proceeding with arguments

We remarked earlier that we would describe how to write a proceed call with arguments that works consistently across Spring AOP and AspectJ. The solution is simply to ensure that the advice signature binds each of the method parameters in order. For example:

@Around("execution(List<Account> find*(..)) &&" +
        "com.xyz.myapp.SystemArchitecture.inDataAccessLayer() && " +
        "args(accountHolderNamePattern)")		
public Object preProcessQueryPattern(ProceedingJoinPoint pjp, String accountHolderNamePattern)
throws Throwable {
  String newPattern = preProcess(accountHolderNamePattern);
  return pjp.proceed(new Object[] {newPattern});
}        

In many cases you will be doing this binding anyway (as in the example above).

6.2.4.7. Advice ordering

What happens when multiple pieces of advice all want to run at the same join point? Spring AOP follows the same precedence rules as AspectJ to determine the order of advice execution. The highest precedence advice runs first "on the way in" (so given two pieces of before advice, the one with highest precedence runs first). "On the way out" from a join point, the highest precedence advice runs last (so given two pieces of after advice, the one with the highest precedence will run second). For advice defined within the same aspect, precedence is established by declaration order. Given the aspect:

@Aspect
public class AspectWithMultipleAdviceDeclarations {

  @Pointcut("execution(* foo(..))")
  public void fooExecution() {}
  
  @Before("fooExecution()")
  public void doBeforeOne() {
    // ...
  }
  
  @Before("fooExecution()")
  public void doBeforeTwo() {
    // ...
  }
  
  @AfterReturning("fooExecution()")
  public void doAfterOne() {
    // ...
  }

  @AfterReturning("fooExecution()")
  public void doAfterTwo() {
    // ...
  }

}

then for any execution of a method named foo, the doBeforeOne, doBeforeTwo, doAfterOne, and doAfterTwo advice methods all need to run. The precedence rules are such that the advice will execute in declaration order. In this case the execution trace would be:

doBeforeOne
doBeforeTwo
foo
doAfterOne
doAfterTwo

In other words doBeforeOne has precedence over doBeforeTwo, because it was defined before doBeforeTwo, and doAfterTwo has precedence over doAfterOne because it was defined after doAfterOne. It's easiest just to remember that advice runs in declaration order ;) - see the AspectJ Programming Guide for full details.

When two pieces of advice defined in different aspects both need to run at the same join point, unless you specify otherwise the order of execution is undefined. You can control the order of execution by specifying precedence. This is done in the normal Spring way by either implementing the org.springframework.core.Ordered interface in the aspect class or annotating it with the Order annotation. Given two aspects, the aspect returning the lower value from Ordered.getOrder() (or the annotation value) has the higher precedence.
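
For example, the following sketch (the class names are illustrative) uses the Order annotation to give one aspect precedence over another; the aspect with the lower order value has the higher precedence:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.core.annotation.Order;

@Aspect
@Order(1)
public class SecurityAspect {

  @Before("com.xyz.myapp.SystemArchitecture.businessService()")
  public void checkSecurity() {
    // runs before the advice declared in LoggingAspect below
  }
}

// declared in a separate source file
@Aspect
@Order(2)
public class LoggingAspect {

  @Before("com.xyz.myapp.SystemArchitecture.businessService()")
  public void logEntry() {
    // ...
  }
}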

6.2.5. Introductions

Introductions (known as inter-type declarations in AspectJ) enable an aspect to declare that advised objects implement a given interface, and to provide an implementation of that interface on behalf of those objects.

An introduction is made using the @DeclareParents annotation. This annotation is used to declare that matching types have a new parent (hence the name). For example, given an interface UsageTracked, and an implementation of that interface DefaultUsageTracked, the following aspect declares that all implementors of service interfaces also implement the UsageTracked interface. (In order to expose statistics via JMX for example.)

@Aspect
public class UsageTracking {

  @DeclareParents(value="com.xyz.myapp.service.*+",
                  defaultImpl=DefaultUsageTracked.class)
  public static UsageTracked mixin;
  
  @Before("com.xyz.myapp.SystemArchitecture.businessService() &&" +
          "this(usageTracked)")
  public void recordUsage(UsageTracked usageTracked) {
    usageTracked.incrementUseCount();
  }
  
}

The interface to be implemented is determined by the type of the annotated field. The value attribute of the @DeclareParents annotation is an AspectJ type pattern: any bean of a matching type will implement the UsageTracked interface. Note that in the before advice of the above example, service beans can be directly used as implementations of the UsageTracked interface. If accessing a bean programmatically you would write the following:

UsageTracked usageTracked = (UsageTracked) context.getBean("myService");

6.2.6. Aspect instantiation models

(This is an advanced topic, so if you are just starting out with AOP you can safely skip it until later.)

By default there will be a single instance of each aspect within the application context. AspectJ calls this the singleton instantiation model. It is possible to define aspects with alternate lifecycles: Spring supports AspectJ's perthis and pertarget instantiation models (percflow, percflowbelow, and pertypewithin are not currently supported).

A "perthis" aspect is declared by specifying a perthis clause in the @Aspect annotation. Let's look at an example, and then we'll explain how it works.

@Aspect("perthis(com.xyz.myapp.SystemArchitecture.businessService())")
public class MyAspect {

  private int someState;
	
  @Before("com.xyz.myapp.SystemArchitecture.businessService()")
  public void recordServiceUsage() {
    // ...
  }
  	
}

The effect of the 'perthis' clause is that one aspect instance will be created for each unique service object executing a business service (each unique object bound to 'this' at join points matched by the pointcut expression). The aspect instance is created the first time that a method is invoked on the service object. The aspect goes out of scope when the service object goes out of scope. Before the aspect instance is created, none of the advice within it executes. As soon as the aspect instance has been created, the advice declared within it will execute at matched join points, but only when the service object is the one this aspect is associated with. See the AspectJ programming guide for more information on per-clauses.

The 'pertarget' instantiation model works in exactly the same way as perthis, but creates one aspect instance for each unique target object at matched join points.

6.2.7. Example

Now that you have seen how all the constituent parts work, let's put them together to do something useful!

The execution of business services can sometimes fail due to concurrency issues (for example, deadlock loser). If the operation is retried, it is quite likely to succeed next time round. For business services where it is appropriate to retry in such conditions (idempotent operations that don't need to go back to the user for conflict resolution), we'd like to transparently retry the operation to avoid the client seeing a PessimisticLockingFailureException. This is a requirement that clearly cuts across multiple services in the service layer, and hence is ideal for implementing via an aspect.

Because we want to retry the operation, we will need to use around advice so that we can call proceed multiple times. Here's how the basic aspect implementation looks:

@Aspect
public class ConcurrentOperationExecutor implements Ordered {
   
   private static final int DEFAULT_MAX_RETRIES = 2;

   private int maxRetries = DEFAULT_MAX_RETRIES;
   private int order = 1;

   public void setMaxRetries(int maxRetries) {
      this.maxRetries = maxRetries;
   }
   
   public int getOrder() {
      return this.order;
   }
   
   public void setOrder(int order) {
      this.order = order;
   }
   
   @Around("com.xyz.myapp.SystemArchitecture.businessService()")
   public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable { 
      int numAttempts = 0;
      PessimisticLockingFailureException lockFailureException;
      do {
         numAttempts++;
         try { 
            return pjp.proceed();
         }
         catch(PessimisticLockingFailureException ex) {
            lockFailureException = ex;
         }
      }
      while(numAttempts <= this.maxRetries);
      throw lockFailureException;
   }

}

Note that the aspect implements the Ordered interface so we can set the precedence of the aspect higher than the transaction advice (we want a fresh transaction each time we retry). The maxRetries and order properties will both be configured by Spring. The main action happens in the doConcurrentOperation around advice. Notice that for the moment we're applying the retry logic to all businessService()s. We try to proceed, and if we fail with a PessimisticLockingFailureException we simply try again unless we have exhausted all of our retry attempts.

The corresponding Spring configuration is:

<aop:aspectj-autoproxy/>

<bean id="concurrentOperationExecutor"
  class="com.xyz.myapp.service.impl.ConcurrentOperationExecutor">
     <property name="maxRetries" value="3"/>
     <property name="order" value="100"/>  
</bean>

To refine the aspect so that it only retries idempotent operations, we might define an Idempotent annotation:

@Retention(RetentionPolicy.RUNTIME)
public @interface Idempotent {
  // marker annotation
}

and use the annotation to annotate the implementation of service operations. The change to the aspect to only retry idempotent operations simply involves refining the pointcut expression so that only @Idempotent operations match:

@Around("com.xyz.myapp.SystemArchitecture.businessService() && " + 
        "@annotation(com.xyz.myapp.service.Idempotent)")
public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable { 
  ...	
}

6.3. Schema-based AOP support

If you are unable to use Java 5, or simply prefer an XML-based format, then Spring 2.0 also offers support for defining aspects using the new "aop" namespace tags. The exact same pointcut expressions and advice kinds are supported as when using the @AspectJ style, hence in this section we will focus on the new syntax and refer the reader to the discussion in the previous section (Section 6.2, “@AspectJ support”) for an understanding of writing pointcut expressions and the binding of advice parameters.

To use the aop namespace tags described in this section, you need to import the spring-aop schema as described in Appendix A, XML Schema-based configuration. See Section A.2.6, “The aop schema” for how to import the tags in the aop namespace.

Within your Spring configurations, all aspect and advisor elements must be placed within an <aop:config> element (you can have more than one <aop:config> element in an application context configuration). An <aop:config> element can contain pointcut, advisor, and aspect elements (note these must be declared in that order).

[Warning]Warning

The <aop:config> style of configuration makes heavy use of Spring's auto-proxying mechanism. This can cause issues (such as advice not being woven) if you are already using explicit auto-proxying via the use of BeanNameAutoProxyCreator or suchlike. The recommended usage pattern is to use either just the <aop:config> style, or just the AutoProxyCreator style.

6.3.1. Declaring an aspect

Using the schema support, an aspect is simply a regular Java object defined as a bean in your Spring application context. The state and behavior are captured in the fields and methods of the object, and the pointcut and advice information is captured in the XML.

An aspect is declared using the <aop:aspect> element, and the backing bean is referenced using the ref attribute:

<aop:config>
  <aop:aspect id="myAspect" ref="aBean">
    ...
  </aop:aspect>
</aop:config>

<bean id="aBean" class="...">
  ...
</bean>

The bean backing the aspect ("aBean" in this case) can of course be configured and dependency injected just like any other Spring bean.

6.3.2. Declaring a pointcut

A pointcut can be declared inside an aspect, in which case it is visible only within that aspect. A pointcut can also be declared directly inside an <aop:config> element, enabling the pointcut definition to be shared across several aspects and advisors.

A pointcut representing the execution of any business service in the service layer could be defined as follows:

<aop:config>

  <aop:pointcut id="businessService" 
        expression="execution(* com.xyz.myapp.service.*.*(..))"/>

</aop:config>

Note that the pointcut expression itself is using the same AspectJ pointcut expression language as described in Section 6.2, “@AspectJ support”. If you are using the schema based declaration style with Java 5, you can refer to named pointcuts defined in types (@Aspects) within the pointcut expression, but this feature is not available on JDK 1.4 and below (it relies on the Java 5 specific AspectJ reflection APIs). On JDK 1.5 therefore, another way of defining the above pointcut would be:

<aop:config>

  <aop:pointcut id="businessService" 
        expression="com.xyz.myapp.SystemArchitecture.businessService()"/>

</aop:config>

This assumes that you have a SystemArchitecture aspect as described in Section 6.2.3.3, “Sharing common pointcut definitions”.

Declaring a pointcut inside an aspect is very similar to declaring a top-level pointcut:

<aop:config>

  <aop:aspect id="myAspect" ref="aBean">

    <aop:pointcut id="businessService" 
          expression="execution(* com.xyz.myapp.service.*.*(..))"/>
          
    ...
    
  </aop:aspect>

</aop:config>

When combining pointcut sub-expressions, '&&' is awkward within an XML document, and so the keywords 'and', 'or' and 'not' can be used in place of '&&', '||' and '!' respectively.

Note that pointcuts defined in this way are referred to by their XML id, and cannot define pointcut parameters. The named pointcut support in the schema based definition style is thus more limited than that offered by the @AspectJ style.

6.3.3. Declaring advice

The same five advice kinds are supported as for the @AspectJ style, and they have exactly the same semantics.

6.3.3.1. Before advice

Before advice runs before a matched method execution. It is declared inside an <aop:aspect> using the <aop:before> element.

<aop:aspect id="beforeExample" ref="aBean">

    <aop:before 
      pointcut-ref="dataAccessOperation" 
      method="doAccessCheck"/>
          
    ...
    
</aop:aspect>

Here dataAccessOperation is the id of a pointcut defined at the top (<aop:config>) level. To define the pointcut inline instead, replace the pointcut-ref attribute with a pointcut attribute:

<aop:aspect id="beforeExample" ref="aBean">

    <aop:before 
      pointcut="execution(* com.xyz.myapp.dao.*.*(..))" 
      method="doAccessCheck"/>
          
    ...
    
</aop:aspect>

As we noted in the discussion of the @AspectJ style, using named pointcuts can significantly improve the readability of your code.

The method attribute identifies a method (doAccessCheck) that provides the body of the advice. This method must be defined for the bean referenced by the aspect element containing the advice. Before a data access operation is executed (a method execution join point matched by the pointcut expression), the "doAccessCheck" method on the aspect bean will be invoked.
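
For instance, the bean referenced as "aBean" above could be as simple as the following sketch (the class name is illustrative):

public class DataAccessCheckBean {

  // invoked by Spring AOP before every matched data access operation
  public void doAccessCheck() {
    // ...
  }

}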

6.3.3.2. After returning advice

After returning advice runs when a matched method execution completes normally. It is declared inside an <aop:aspect> in the same way as before advice. For example:

<aop:aspect id="afterReturningExample" ref="aBean">

    <aop:after-returning 
      pointcut-ref="dataAccessOperation" 
      method="doAccessCheck"/>
          
    ...
    
</aop:aspect>

Just as in the @AspectJ style, it is possible to get hold of the return value within the advice body. Use the returning attribute to specify the name of the parameter to which the return value should be passed:

<aop:aspect id="afterReturningExample" ref="aBean">

    <aop:after-returning 
      pointcut-ref="dataAccessOperation"
      returning="retVal" 
      method="doAccessCheck"/>
          
    ...
    
</aop:aspect>

The doAccessCheck method must declare a parameter named retVal. The type of this parameter constrains matching in the same way as described for @AfterReturning. For example, the method signature may be declared as:

public void doAccessCheck(Object retVal) {...

6.3.3.3. After throwing advice

After throwing advice executes when a matched method execution exits by throwing an exception. It is declared inside an <aop:aspect> using the after-throwing element:

<aop:aspect id="afterThrowingExample" ref="aBean">

    <aop:after-throwing
      pointcut-ref="dataAccessOperation" 
      method="doRecoveryActions"/>
          
    ...
    
</aop:aspect>

Just as in the @AspectJ style, it is possible to get hold of the thrown exception within the advice body. Use the throwing attribute to specify the name of the parameter to which the exception should be passed:

<aop:aspect id="afterThrowingExample" ref="aBean">

    <aop:after-throwing 
      pointcut-ref="dataAccessOperation"
      throwing="dataAccessEx" 
      method="doRecoveryActions"/>
          
    ...
    
</aop:aspect>

The doRecoveryActions method must declare a parameter named dataAccessEx. The type of this parameter constrains matching in the same way as described for @AfterThrowing. For example, the method signature may be declared as:

public void doRecoveryActions(DataAccessException dataAccessEx) {...

6.3.3.4. After (finally) advice

After (finally) advice runs however a matched method execution exits. It is declared using the after element:

<aop:aspect id="afterFinallyExample" ref="aBean">

    <aop:after
      pointcut-ref="dataAccessOperation" 
      method="doReleaseLock"/>
          
    ...
    
</aop:aspect>

6.3.3.5. Around advice

The final kind of advice is around advice. Around advice runs "around" a matched method execution. It has the opportunity to do work both before and after the method executes, and to determine when, how, and even if, the method actually gets to execute at all. Around advice is often used if you need to share state before and after a method execution in a thread-safe manner (starting and stopping a timer for example). Always use the least powerful form of advice that meets your requirements; don't use around advice if simple before advice would do.

Around advice is declared using the aop:around element. The first parameter of the advice method must be of type ProceedingJoinPoint. Within the body of the advice, calling proceed() on the ProceedingJoinPoint causes the underlying method to execute. The proceed method may also be called passing in an Object[] - the values in the array will be used as the arguments to the method execution when it proceeds. See Section 6.2.4.5, “Around advice” for notes on calling proceed with an Object[].

<aop:aspect id="aroundExample" ref="aBean">

    <aop:around
      pointcut-ref="businessService" 
      method="doBasicProfiling"/>
          
    ...
    
</aop:aspect>

The implementation of the doBasicProfiling advice would be exactly the same as in the @AspectJ example (minus the annotation of course):

public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
    // start stopwatch
    Object retVal = pjp.proceed();
    // stop stopwatch
    return retVal;
}

6.3.3.6. Advice parameters

The schema based declaration style supports fully typed advice in the same way as described for the @AspectJ support - by matching pointcut parameters by name against advice method parameters. See Section 6.2.4.6, “Advice parameters” for details.

If you wish to explicitly specify argument names for the advice methods (not relying on either of the detection strategies previously described) then this is done using the arg-names attribute of the advice element. For example:

<aop:before
  pointcut="com.xyz.lib.Pointcuts.anyPublicMethod() and @annotation(auditable)"
  method="audit"
  arg-names="auditable"/>

The arg-names attribute accepts a comma-delimited list of parameter names.

Find below a slightly more involved example of the XSD-based approach that illustrates some around advice used in conjunction with a number of strongly typed parameters.

package x.y.service;

public interface FooService {

   Foo getFoo(String fooName, int age);
}

public class DefaultFooService implements FooService {

   public Foo getFoo(String name, int age) {
      return new Foo(name, age);
   }
}

Next up is the aspect. Notice that the profile(..) method accepts a number of strongly-typed parameters, the first of which happens to be the join point used to proceed with the method call: the presence of this parameter is an indication that profile(..) is to be used as around advice:

package x.y;

import org.aspectj.lang.ProceedingJoinPoint;
import org.springframework.util.StopWatch;

public class SimpleProfiler {

   public Object profile(ProceedingJoinPoint call, String name, int age) throws Throwable {
      StopWatch clock = new StopWatch(
            "Profiling for '" + name + "' and '" + age + "'");
      try {
         clock.start(call.toShortString());
         return call.proceed();
      } finally {
         clock.stop();
         System.out.println(clock.prettyPrint());
      }
   }
}

Finally, here is the XML configuration that is required to effect the execution of the above advice for a particular join point:

<beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:aop="http://www.springframework.org/schema/aop"
      xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

   <!-- this is the object that will be proxied by Spring's AOP infrastructure -->
   <bean id="fooService" class="x.y.service.DefaultFooService"/>

   <!-- this is the actual advice itself -->
   <bean id="profiler" class="x.y.SimpleProfiler"/>

   <aop:config>
      <aop:aspect ref="profiler">

         <aop:pointcut id="theExecutionOfSomeFooServiceMethod"
                    expression="execution(* x.y.service.FooService.getFoo(String,int))
                    and args(name, age)"/>

         <aop:around pointcut-ref="theExecutionOfSomeFooServiceMethod"
                  method="profile"/>

      </aop:aspect>
   </aop:config>

</beans>

If we had the following driver script, we would get output something like this on standard output:

import org.springframework.beans.factory.BeanFactory;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import x.y.service.FooService;

public final class Boot {

   public static void main(final String[] args) throws Exception {
      BeanFactory ctx = new ClassPathXmlApplicationContext("x/y/plain.xml");
      FooService foo = (FooService) ctx.getBean("fooService");
      foo.getFoo("Pengo", 12);
   }
}
StopWatch 'Profiling for 'Pengo' and '12'': running time (millis) = 0
-----------------------------------------
ms     %     Task name
-----------------------------------------
00000  ?  execution(getFoo)

6.3.3.7. Advice ordering

When multiple advice needs to execute at the same join point (executing method) the ordering rules are as described in Section 6.2.4.7, “Advice ordering”. The precedence between aspects is determined by either adding the Order annotation to the bean backing the aspect or by having the bean implement the Ordered interface.

6.3.4. Introductions

Introductions (known as inter-type declarations in AspectJ) enable an aspect to declare that advised objects implement a given interface, and to provide an implementation of that interface on behalf of those objects.

An introduction is made using the aop:declare-parents element inside an aop:aspect. This element is used to declare that matching types have a new parent (hence the name). For example, given an interface UsageTracked, and an implementation of that interface DefaultUsageTracked, the following aspect declares that all implementors of service interfaces also implement the UsageTracked interface. (In order to expose statistics via JMX for example.)

<aop:aspect id="usageTrackerAspect" ref="usageTracking">

  <aop:declare-parents
      types-matching="com.xyz.myapp.service.*+"
      implement-interface="UsageTracked"
      default-impl="com.xyz.myapp.service.tracking.DefaultUsageTracked"/>
  
  <aop:before
    pointcut="com.xyz.myapp.SystemArchitecture.businessService()
              and this(usageTracked)"
    method="recordUsage"/>
  
</aop:aspect>

The class backing the usageTracking bean would contain the method:

public void recordUsage(UsageTracked usageTracked) {
    usageTracked.incrementUseCount();
}

The interface to be implemented is determined by the implement-interface attribute. The value of the types-matching attribute is an AspectJ type pattern: any bean of a matching type will implement the UsageTracked interface. Note that in the before advice of the above example, service beans can be directly used as implementations of the UsageTracked interface. If accessing a bean programmatically you would write the following:

UsageTracked usageTracked = (UsageTracked) context.getBean("myService");

6.3.5. Aspect instantiation models

The only supported instantiation model for schema-defined aspects is the singleton model. Other instantiation models may be supported in future releases.

6.3.6. Advisors

The concept of "advisors" is brought forward from the AOP support defined in Spring 1.2 and does not have a direct equivalent in AspectJ. An advisor is like a small self-contained aspect that has a single piece of advice. The advice itself is represented by a bean, and must implement one of the advice interfaces described in Section 7.3.2, “Advice types in Spring”. Advisors can take advantage of AspectJ pointcut expressions though.
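
By way of a sketch, an advice bean suitable for use with an advisor could implement the AOP Alliance MethodInterceptor interface (one of the advice types covered in that section); the class name here is illustrative:

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class TracingInterceptor implements MethodInterceptor {

  public Object invoke(MethodInvocation invocation) throws Throwable {
    // runs around every invocation matched by the advisor's pointcut
    System.out.println("Entering " + invocation.getMethod().getName());
    try {
      return invocation.proceed();
    }
    finally {
      System.out.println("Exiting " + invocation.getMethod().getName());
    }
  }

}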

Spring 2.0 supports the advisor concept with the <aop:advisor> element. You will most commonly see it used in conjunction with transactional advice, which also has its own namespace support in Spring 2.0. Here's how it looks:

<aop:config>

  <aop:pointcut id="businessService"
        expression="execution(* com.xyz.myapp.service.*.*(..))"/>

  <aop:advisor 
      pointcut-ref="businessService"
      advice-ref="tx-advice"/>
      
</aop:config>

<tx:advice id="tx-advice">
  <tx:attributes>
    <tx:method name="*" propagation="REQUIRED"/>
  </tx:attributes>
</tx:advice>

As well as the pointcut-ref attribute used in the above example, you can also use the pointcut attribute to define a pointcut expression inline.

To define the precedence of an advisor so that the advice can participate in ordering, use the order attribute to define the Ordered value of the advisor.

6.3.7. Example

Let's see how the concurrent locking failure retry example from Section 6.2.7, “Example” looks when rewritten using the schema support.

The execution of business services can sometimes fail due to concurrency issues (for example, deadlock loser). If the operation is retried, it is quite likely it will succeed next time round. For business services where it is appropriate to retry in such conditions (idempotent operations that don't need to go back to the user for conflict resolution), we'd like to transparently retry the operation to avoid the client seeing a PessimisticLockingFailureException. This is a requirement that clearly cuts across multiple services in the service layer, and hence is ideal for implementing via an aspect.

Because we want to retry the operation, we'll need to use around advice so that we can call proceed multiple times. Here's how the basic aspect implementation looks (it's just a regular Java class using the schema support):

public class ConcurrentOperationExecutor implements Ordered {
   
   private static final int DEFAULT_MAX_RETRIES = 2;

   private int maxRetries = DEFAULT_MAX_RETRIES;
   private int order = 1;

   public void setMaxRetries(int maxRetries) {
      this.maxRetries = maxRetries;
   }
   
   public int getOrder() {
      return this.order;
   }
   
   public void setOrder(int order) {
      this.order = order;
   }
   
   public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable { 
      int numAttempts = 0;
      PessimisticLockingFailureException lockFailureException;
      do {
         numAttempts++;
         try { 
            return pjp.proceed();
         }
         catch(PessimisticLockingFailureException ex) {
            lockFailureException = ex;
         }
      }
      while(numAttempts <= this.maxRetries);
      throw lockFailureException;
   }

}

Note that the aspect implements the Ordered interface so we can set the precedence of the aspect higher than the transaction advice (we want a fresh transaction each time we retry). The maxRetries and order properties will both be configured by Spring. The main action happens in the doConcurrentOperation around advice method. We try to proceed, and if we fail with a PessimisticLockingFailureException we simply try again unless we have exhausted all of our retry attempts.

This class is identical to the one used in the @AspectJ example, but with the annotations removed.

The corresponding Spring configuration is:

<aop:config>

  <aop:aspect id="concurrentOperationRetry" ref="concurrentOperationExecutor">

    <aop:pointcut id="idempotentOperation"
        expression="execution(* com.xyz.myapp.service.*.*(..))"/>
       
    <aop:around
       pointcut-ref="idempotentOperation"
       method="doConcurrentOperation"/>
  
  </aop:aspect>

</aop:config>

<bean id="concurrentOperationExecutor"
  class="com.xyz.myapp.service.impl.ConcurrentOperationExecutor">
     <property name="maxRetries" value="3"/>
     <property name="order" value="100"/>  
</bean>

Notice that for the time being we assume that all business services are idempotent. If this is not the case we can refine the aspect so that it only retries genuinely idempotent operations, by introducing an Idempotent annotation:

@Retention(RetentionPolicy.RUNTIME)
public @interface Idempotent {
  // marker annotation
}

and using the annotation to annotate the implementation of service operations. The change to the aspect to only retry idempotent operations simply involves refining the pointcut expression so that only @Idempotent operations match:

  <aop:pointcut id="idempotentOperation"
        expression="execution(* com.xyz.myapp.service.*.*(..)) and
                    @annotation(com.xyz.myapp.service.Idempotent)"/>

6.4. Choosing which AOP declaration style to use

Once you have decided that an aspect is the best approach for implementing a given requirement, how do you decide between using Spring AOP or AspectJ, and between the Aspect language (code) style, @AspectJ annotation style, and the XML style? These decisions are influenced by a number of factors including application requirements, development tools, and team familiarity with AOP.

6.4.1. Spring AOP or full AspectJ?

Use the simplest thing that can work. Spring AOP is simpler than using full AspectJ as there is no requirement to introduce the AspectJ compiler / weaver into your development and build processes. If you only need to advise the execution of operations on Spring beans, then Spring AOP is the right choice. If you need to advise domain objects, or any other object not managed by the Spring container, then you will need to use AspectJ. You will also need to use AspectJ if you wish to advise join points other than simple method executions (for example, call join points, field get or set join points, and so on).

When using AspectJ, you have the choice of the AspectJ language syntax (also known as the "code style") or the @AspectJ annotation style. If aspects play a large role in your design, and you are able to use the AspectJ Development Tools (AJDT) in Eclipse, then the AspectJ language syntax is the preferred option: it is cleaner and simpler because the language was purposefully designed for writing aspects. If you are not using Eclipse, or have only a few aspects that do not play a major role in your application, then you may want to consider using the @AspectJ style and sticking with a regular Java compilation in your IDE, and adding an aspect weaving (linking) phase to your build scripts.

6.4.2. @AspectJ or XML for Spring AOP?

The XML style will be most familiar to existing Spring users. It can be used with any JDK level (referring to named pointcuts from within pointcut expressions does still require Java 5 though) and is backed by genuine POJOs. When using AOP as a tool to configure enterprise services (a good test is whether you consider the pointcut expression to be a part of your configuration you might want to change independently) then XML can be a good choice. With the XML style it is arguably clearer from your configuration what aspects are present in the system.

The XML style has two disadvantages. Firstly it does not fully encapsulate the implementation of the requirement it addresses in a single place. The DRY principle says that there should be a single, unambiguous, authoritative representation of any piece of knowledge within a system. When using the XML style, the knowledge of how a requirement is implemented is split across the declaration of the backing bean class, and the XML in the configuration file. When using the @AspectJ style there is a single module - the aspect - in which this information is encapsulated. Secondly, the XML style is more limited in what it can express than the @AspectJ style: only the "singleton" aspect instantiation model is supported, and it is not possible to combine named pointcuts declared in XML. For example, in the @AspectJ style we can write something like:

  @Pointcut("execution(* get*())")
  public void propertyAccess() {}

  @Pointcut("execution(org.xyz.Account+ *(..))")
  public void operationReturningAnAccount() {}

  @Pointcut("propertyAccess() && operationReturningAnAccount()")
  public void accountPropertyAccess() {}

In the XML style I certainly can declare the first two pointcuts:

  <aop:pointcut id="propertyAccess" 
      expression="execution(* get*())"/>

  <aop:pointcut id="operationReturningAnAccount" 
      expression="execution(org.xyz.Account+ *(..))"/>

The downside of the XML approach becomes evident in this case because I cannot define the 'accountPropertyAccess' pointcut by combining these definitions.

The @AspectJ style supports additional instantiation models, and richer pointcut composition. It has the advantage of keeping the aspect as a modular unit. It also has the advantage the @AspectJ aspects can be understood both by Spring AOP and by AspectJ - so if you later decide you need the capabilities of AspectJ to implement additional requirements then it is very easy to migrate to an AspectJ based approach.

So much for the pros and cons of each style then: which is best? If you are not using Java 5 (or above) then clearly the XML style is the best because it is the only option available to you. If you are using Java 5+, then you really will have to come to your own decision as to which style suits you best. In the experience of the Spring team, we advocate the use of the @AspectJ style whenever there are aspects that do more than simple "configuration" of enterprise services. If you are writing, have written, or have access to an aspect that is not part of the business contract of a particular class (such as a tracing aspect), then the XML style is better.

6.5. Mixing aspect types

It is perfectly possible to mix @AspectJ style aspects using the autoproxying support, schema-defined <aop:aspect> aspects, <aop:advisor> declared advisors and even proxies and interceptors defined using the Spring 1.2 style in the same configuration. All of these are implemented using the same underlying support mechanism and will co-exist without any difficulty.

6.6. Proxying mechanisms

Spring AOP uses either JDK dynamic proxies or CGLIB to create the proxy for a given target object. (JDK dynamic proxies are preferred whenever you have a choice).

If the target object to be proxied implements at least one interface then a JDK dynamic proxy will be used. All of the interfaces implemented by the target type will be proxied. If the target object does not implement any interfaces then a CGLIB proxy will be created.

If you want to force the use of CGLIB proxying (for example, to proxy every method defined for the target object, not just those implemented by its interfaces) you can do so. However, there are some issues to consider:

  • final methods cannot be advised, as they cannot be overridden.

  • You will need the CGLIB 2 binaries on your classpath, whereas dynamic proxies are available with the JDK. Spring will automatically warn you when it needs CGLIB but it isn't available on the classpath.

  • The constructor of your proxied object will be called twice. This is a natural consequence of the CGLIB proxy model whereby a subclass is generated for each proxied object. For each proxied instance, two objects are created: the actual proxied object and an instance of the subclass that implements the advice. This behavior is not exhibited when using JDK proxies. Usually, calling the constructor of the proxied type twice is not a huge problem, as there are usually only assignments taking place and no real logic is (or should be) implemented in the constructor.

To force the use of CGLIB proxies set the value of the proxy-target-class attribute of the <aop:config> element to true:

<aop:config proxy-target-class="true">
	
	<!-- other beans defined here... -->
	
</aop:config>

To force CGLIB proxying when using the @AspectJ autoproxy support, set the 'proxy-target-class' attribute of the <aop:aspectj-autoproxy> element to true:

<aop:aspectj-autoproxy proxy-target-class="true"/>

6.6.1. Understanding AOP proxies

Spring AOP is proxy-based. It is vitally important that you grasp the semantics of what that last statement actually means before you write your own aspects or use any of the Spring AOP-based aspects supplied with the Spring Framework.

Consider first the scenario where you have a plain-vanilla, un-proxied, nothing-special-about-it, straight object reference, as illustrated by the following code snippet.

public class SimplePojo implements Pojo {

   public void foo() {
      // this is a direct method call on the 'this' reference
      this.bar();
   }
   
   public void bar() {
      // some logic...
   }
}

If you invoke a method on an object reference, the method is invoked directly on that object reference, as can be seen below.

public class Main {

   public static void main(String[] args) {
   
      Pojo pojo = new SimplePojo();
      
      // this is a direct method call on the 'pojo' reference
      pojo.foo();
   }
}

Things change slightly when the reference that client code has is a proxy. Consider the following code snippet.

public class Main {

   public static void main(String[] args) {
   
      ProxyFactory factory = new ProxyFactory(new SimplePojo());
      factory.addInterface(Pojo.class);
      factory.addAdvice(new RetryAdvice());

      Pojo pojo = (Pojo) factory.getProxy();
      
      // this is a method call on the proxy!
      pojo.foo();
   }
}

The key thing to understand here is that the client code inside the main(..) of the Main class has a reference to the proxy. This means that method calls on that object reference will be calls on the proxy, and as such the proxy will be able to delegate to all of the interceptors (advice) that are relevant to that particular method call. However, once the call has finally reached the target object, the SimplePojo reference in this case, any method calls that it may make on itself, such as this.bar() or this.foo(), are going to be invoked against the this reference, and not the proxy. This has important implications. It means that self-invocation is not going to result in the advice associated with a method invocation getting a chance to execute.

Okay, so what is to be done about this? The best approach (the term best is used loosely here) is to refactor your code such that the self-invocation does not happen. For sure, this does entail some work on your part, but it is the best, least-invasive approach. The next approach is absolutely horrendous, and I am almost reticent to point it out precisely because it is so horrendous. You can (choke!) totally tie the logic within your class to Spring AOP by doing this:

public class SimplePojo implements Pojo {

   public void foo() {
      // this works, but... gah!
      ((Pojo) AopContext.currentProxy()).bar();
   }
   
   public void bar() {
      // some logic...
   }
}

This totally couples your code to Spring AOP, and it makes the class itself aware of the fact that it is being used in an AOP context, which flies in the face of AOP. It also requires some additional configuration when the proxy is being created:

public class Main {

   public static void main(String[] args) {
   
      ProxyFactory factory = new ProxyFactory(new SimplePojo());
      factory.addInterface(Pojo.class);
      factory.addAdvice(new RetryAdvice());
      factory.setExposeProxy(true);

      Pojo pojo = (Pojo) factory.getProxy();

      // this is a method call on the proxy!
      pojo.foo();
   }
}

Finally, it must be noted that AspectJ does not have this self-invocation issue because it is not a proxy-based AOP framework.

6.7. Programmatic creation of @AspectJ Proxies

In addition to declaring aspects in your configuration using either <aop:config> or <aop:aspectj-autoproxy>, it is also possible programmatically to create proxies that advise target objects. For the full details of Spring's AOP API, see the next chapter. Here we want to focus on the ability to automatically create proxies using @AspectJ aspects.

The class org.springframework.aop.aspectj.annotation.AspectJProxyFactory can be used to create a proxy for a target object that is advised by one or more @AspectJ aspects. Basic usage for this class is very simple, as illustrated below. See the Javadocs for full information.

// create a factory that can generate a proxy for the given target object
AspectJProxyFactory factory = new AspectJProxyFactory(targetObject); 

// add an aspect, the class must be an @AspectJ aspect
// you can call this as many times as you need with different aspects
factory.addAspect(SecurityManager.class);

// you can also add existing aspect instances, the type of the object supplied must be an @AspectJ aspect
factory.addAspect(usageTracker);	

// now get the proxy object...
MyInterfaceType proxy = factory.getProxy();

6.8. Using AspectJ with Spring applications

Everything we've covered so far in this chapter is pure Spring AOP. In this section, we're going to look at how you can use the AspectJ compiler/weaver instead of or in addition to Spring AOP if your needs go beyond the facilities offered by Spring AOP alone.

Spring ships with a small AspectJ aspect library (it's available standalone in your distribution as spring-aspects.jar, you'll need to add this to your classpath to use the aspects in it). Section 6.8.1, “Using AspectJ to dependency inject domain objects with Spring” and Section 6.8.2, “Other Spring aspects for AspectJ” discuss the content of this library and how you can use it. Section 6.8.3, “Configuring AspectJ aspects using Spring IoC” discusses how to dependency inject AspectJ aspects that are woven using the AspectJ compiler. Finally, Section 6.8.4, “Using AspectJ Load-time weaving (LTW) with Spring applications” provides an introduction to load-time weaving for Spring applications using AspectJ.

6.8.1. Using AspectJ to dependency inject domain objects with Spring

The Spring container instantiates and configures beans defined in your application context. It is also possible to ask a bean factory to configure a pre-existing object given the name of a bean definition containing the configuration to be applied. The spring-aspects.jar contains an annotation-driven aspect that exploits this capability to allow dependency-injection of any object. The support is intended to be used for objects created outside of the control of any container. Domain objects often fall into this category: they may be created programmatically using the new operator, or by an ORM tool as a result of a database query.

The @Configurable annotation marks a class as eligible for Spring-driven configuration. In the simplest case it can be used just as a marker annotation:

package com.xyz.myapp.domain;

import org.springframework.beans.factory.annotation.Configurable;

@Configurable
public class Account {
   ...
}

When used as a marker annotation in this way, Spring will configure new instances of the annotated type (Account in this case) using a prototypical bean definition with the same name as the fully-qualified type name (com.xyz.myapp.domain.Account). Since the default name for a bean is the fully-qualified name of its type, a convenient way to declare the prototype definition is simply to omit the id attribute:

<bean class="com.xyz.myapp.domain.Account" scope="prototype">
  <property name="fundsTransferService" ref="fundsTransferService"/>
  ...
</bean>

If you want to explicitly specify the name of the prototype bean definition to use, you can do so directly in the annotation:

package com.xyz.myapp.domain;

import org.springframework.beans.factory.annotation.Configurable;

@Configurable("account")
public class Account {
   ...
}

Spring will now look for a bean definition named "account" and use that as a prototypical definition to configure new Account instances.

You can also use autowiring to avoid having to specify a prototypical bean definition at all. To have Spring apply autowiring use the autowire property of the @Configurable annotation: specify either @Configurable(autowire=Autowire.BY_TYPE) or @Configurable(autowire=Autowire.BY_NAME) for autowiring by type or by name respectively.
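
For example, the following sketch relies purely on autowiring by type, so no prototype bean definition for Account is needed (it assumes that each property to be injected has a setter of a matching type):

package com.xyz.myapp.domain;

import org.springframework.beans.factory.annotation.Autowire;
import org.springframework.beans.factory.annotation.Configurable;

@Configurable(autowire=Autowire.BY_TYPE)
public class Account {
   ...
}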

Finally you can enable Spring dependency checking for the object references in the newly created and configured object by using the dependencyCheck attribute (for example: @Configurable(autowire=Autowire.BY_NAME,dependencyCheck=true) ). If this attribute is set to true, then Spring will validate after configuration that all properties (that are not primitives or collections) have been set.

Using the annotation on its own does nothing of course. It's the AnnotationBeanConfigurerAspect in spring-aspects.jar that acts on the presence of the annotation. In essence the aspect says "after returning from the initialization of a new object of a type with the @Configurable annotation, configure the newly created object using Spring in accordance with the properties of the annotation". For this to work the annotated types must be woven with the AspectJ weaver - you can either use a build-time Ant or Maven task to do this (see for example the AspectJ Development Environment Guide) or load-time weaving (see Section 6.8.4, “Using AspectJ Load-time weaving (LTW) with Spring applications”).

The AnnotationBeanConfigurerAspect itself needs configuring by Spring (in order to obtain a reference to the bean factory that is to be used to configure new objects). The Spring AOP namespace defines a convenient tag for doing this. Simply include the following in your application context configuration:

<aop:spring-configured/>

If you are using the DTD instead of schema, the equivalent definition is:

<bean 
  class="org.springframework.beans.factory.aspectj.AnnotationBeanConfigurerAspect"
  factory-method="aspectOf"/>

Instances of @Configurable objects created before the aspect has been configured will result in a warning being issued to the log and no configuration of the object taking place. An example might be a bean in the Spring configuration that creates domain objects when it is initialized by Spring. In this case you can use the "depends-on" bean attribute to manually specify that the bean depends on the configuration aspect.

<bean id="myService"
  class="com.xzy.myapp.service.MyService"
  depends-on="org.springframework.beans.factory.aspectj.AnnotationBeanConfigurerAspect">
  ...
</bean>

6.8.1.1. Unit testing @Configurable objects

One of the goals of the @Configurable support is to enable independent unit testing of domain objects without the difficulties associated with hard-coded lookups. If @Configurable types have not been woven by AspectJ then the annotation has no effect during unit testing, and you can simply set mock or stub property references in the object under test and proceed as normal. If @Configurable types have been woven by AspectJ then you can still unit test outside of the container as normal, but you will see a warning message each time that you construct an @Configurable object indicating that it has not been configured by Spring.
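
A minimal sketch of such a test, assuming the hypothetical FundsTransferService collaborator from the earlier example and a mock implementation created by the test:

Account account = new Account();
account.setFundsTransferService(mockFundsTransferService);  // stub or mock created by the test
// exercise the account and make assertions as usual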

6.8.1.2. Working with multiple application contexts

The AnnotationBeanConfigurerAspect used to implement the @Configurable support is an AspectJ singleton aspect. The scope of a singleton aspect is the same as the scope of static members, that is to say there is one aspect instance per classloader that defines the type. This means that if you define multiple application contexts within the same classloader hierarchy you need to consider where to define the <aop:spring-configured/> bean and where to place spring-aspects.jar on the classpath.

Consider a typical Spring web-app configuration with a shared parent application context defining common business services and everything needed to support them, and one child application context per servlet containing definitions particular to that servlet. All of these contexts will co-exist within the same classloader hierarchy, and so the AnnotationBeanConfigurerAspect can only hold a reference to one of them. In this case we recommend defining the <aop:spring-configured/> bean in the shared (parent) application context: this defines the services that you are likely to want to inject into domain objects. A consequence is that you cannot configure domain objects with references to beans defined in the child (servlet-specific) contexts using the @Configurable mechanism (probably not something you want to do anyway!).

When deploying multiple web-apps within the same container, ensure that each web-application loads the types in spring-aspects.jar using its own classloader (for example, by placing spring-aspects.jar in 'WEB-INF/lib'). If spring-aspects.jar is only added to the container-wide classpath (and hence loaded by the shared parent classloader), all web applications will share the same aspect instance which is probably not what you want.

6.8.2. Other Spring aspects for AspectJ

In addition to the @Configurable support, spring-aspects.jar contains an AspectJ aspect that can be used to drive Spring's transaction management for types and methods annotated with the @Transactional annotation. This is primarily intended for users who want to use Spring's transaction support outside of the Spring container.

The aspect that interprets @Transactional annotations is the AnnotationTransactionAspect. When using this aspect, you must annotate the implementation class (and/or methods within that class), not the interface (if any) that the class implements. AspectJ follows Java's rule that annotations on interfaces are not inherited.

A @Transactional annotation on a class specifies the default transaction semantics for the execution of any public operation in the class.

A @Transactional annotation on a method within the class overrides the default transaction semantics given by the class annotation (if present). Methods with public, protected, and default visibility may all be annotated. Annotating protected and default-visibility methods directly is the only way to get transaction demarcation for the execution of such operations.
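
As an illustration, the following sketch shows a class-level default overridden at method level (AccountService, AccountServiceImpl, Account and the AccountDao collaborator are hypothetical):

import org.springframework.transaction.annotation.Transactional;

@Transactional(readOnly = true)
public class AccountServiceImpl implements AccountService {

    private AccountDao accountDao;   // hypothetical collaborator

    // inherits the class-level, read-only transaction semantics
    public Account loadAccount(String accountId) {
        return this.accountDao.load(accountId);
    }

    // the method-level annotation overrides the class-level default
    @Transactional(readOnly = false)
    public void transfer(String fromId, String toId, double amount) {
        // debit and credit within a read-write transaction
    }
}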

For AspectJ programmers that want to use the Spring configuration and transaction management support but don't want to (or can't) use annotations, spring-aspects.jar also contains abstract aspects you can extend to provide your own pointcut definitions. See the Javadocs for AbstractBeanConfigurerAspect and AbstractTransactionAspect for more information. As an example, the following excerpt shows how you could write an aspect to configure all instances of objects defined in the domain model using prototypical bean definitions that match the fully-qualified class names:

public aspect DomainObjectConfiguration extends AbstractBeanConfigurerAspect {

  public DomainObjectConfiguration() {
    setBeanWiringInfoResolver(new ClassNameBeanWiringInfoResolver());
  }

  // the creation of a new bean (any object in the domain model)
  protected pointcut beanCreation(Object beanInstance) :
    initialization(new(..)) &&
    SystemArchitecture.inDomainModel() &&
    this(beanInstance);
}

6.8.3. Configuring AspectJ aspects using Spring IoC

When using AspectJ aspects with Spring applications, it's natural to want to configure such aspects using Spring. The AspectJ runtime itself is responsible for aspect creation, and the means of configuring the AspectJ-created aspects via Spring depends on the AspectJ instantiation model (per-clause) used by the aspect.

The majority of AspectJ aspects are singleton aspects. Configuration of these aspects is very easy: simply create a bean definition referencing the aspect type as normal, and include the bean attribute 'factory-method="aspectOf"'. This ensures that Spring obtains the aspect instance by asking AspectJ for it rather than trying to create an instance itself. For example:

<bean id="profiler" class="com.xyz.profiler.Profiler"
      factory-method="aspectOf">
  <property name="profilingStrategy" ref="jamonProfilingStrategy"/>
</bean>

For non-singleton aspects, the easiest way to configure them is to create prototypical bean definitions and use the @Configurable support from spring-aspects.jar to configure the aspect instances once they have been created by the AspectJ runtime.

If you have some @AspectJ aspects that you want to weave with AspectJ (for example, using load-time weaving for domain model types) and other @AspectJ aspects that you want to use with Spring AOP, and these aspects are all configured using Spring, then you'll need to tell the Spring AOP @AspectJ autoproxying support which subset of the @AspectJ aspects defined in the configuration should be used for autoproxying. You can do this by using one or more <include/> elements inside the <aop:aspectj-autoproxy/> declaration. Each include element specifies a name pattern, and only beans with names matched by at least one of the patterns will be used for Spring AOP autoproxy configuration:

<aop:aspectj-autoproxy>
  <include name="thisBean"/>
  <include name="thatBean"/>
</aop:aspectj-autoproxy>

6.8.4. Using AspectJ Load-time weaving (LTW) with Spring applications

Load-time weaving (or LTW) refers to the process of weaving AspectJ aspects with an application's class files as they are loaded into the VM. For full details on configuring load-time weaving with AspectJ, see the LTW section of the AspectJ Development Environment Guide. We will focus here on the essentials of configuring load-time weaving for Spring applications running on Java 5.

Load-time weaving is controlled by defining a file 'aop.xml' in the META-INF directory. AspectJ automatically looks for all 'META-INF/aop.xml' files visible on the classpath and configures itself based on the aggregation of their content.

A basic META-INF/aop.xml for your application should look like this:

<!DOCTYPE aspectj PUBLIC
  "-//AspectJ//DTD//EN" "http://www.eclipse.org/aspectj/dtd/aspectj.dtd">

<aspectj>
  <weaver>
    <include within="com.xyz.myapp..*"/>
  </weaver>
</aspectj>

The <include/> element tells AspectJ which set of types should be included in the weaving process. Use the package prefix for your application followed by "..*" (meaning '... and any type defined in a subpackage of this') as a good default. Using the include element is important as otherwise AspectJ will look at every type loaded in support of your application (including all the Spring library classes and many more besides). Normally you don't want to weave these types and don't want to pay the overhead of AspectJ attempting to match against them.

To get informational messages in your log file regarding the activity of the load-time weaver, add the following options to the weaver element:

<!DOCTYPE aspectj PUBLIC
  "-//AspectJ//DTD//EN" "http://www.eclipse.org/aspectj/dtd/aspectj.dtd">

<aspectj>
  <weaver
    options="-showWeaveInfo
             -XmessageHandlerClass:org.springframework.aop.aspectj.AspectJWeaverMessageHandler">
    <include within="com.xyz.myapp..*"/>
  </weaver>
</aspectj>

Finally, to control exactly which aspects are used, you can use the aspects element. By default all defined aspects are used for weaving (spring-aspects.jar contains a META-INF/aop.xml file that defines the configuration and transaction aspects). If you were using spring-aspects.jar, but only want the configuration support and not the transaction support you could specify this as follows:

<!DOCTYPE aspectj PUBLIC
  "-//AspectJ//DTD//EN" "http://www.eclipse.org/aspectj/dtd/aspectj.dtd">

<aspectj>   
  <weaver 
    options="-showWeaveInfo -XmessageHandlerClass:org.springframework.aop.aspectj.AspectJWeaverMessageHandler">
    <include within="com.xyz.myapp..*"/>
  </weaver>
  <aspects>
    <include within="org.springframework.beans.factory.aspectj.AnnotationBeanConfigurerAspect"/>
  </aspects> 
</aspectj>

On the Java 5 platform, load-time weaving is enabled by specifying the following VM argument when launching the Java virtual machine:

-javaagent:<path-to-ajlibs>/aspectjweaver.jar

6.9. Further Resources

More information on AspectJ can be found at the AspectJ home page.

The book Eclipse AspectJ by Adrian Colyer et al. (Addison-Wesley, 2005) provides a comprehensive introduction and reference for the AspectJ language.

The excellent AspectJ in Action by Ramnivas Laddad (Manning, 2003) comes highly recommended as an introduction to AOP; the focus of the book is on AspectJ, but a lot of general AOP themes are explored in some depth.

Chapter 7. Spring AOP APIs

7.1. Introduction

The previous chapter described the Spring 2.0 support for AOP using @AspectJ and schema-based aspect definitions. In this chapter we discuss the lower-level Spring AOP APIs and the AOP support used in Spring 1.2 applications. For new applications, we recommend the use of the Spring 2.0 AOP support described in the previous chapter, but when working with existing applications, or when reading books and articles, you may come across Spring 1.2 style examples. Spring 2.0 is fully backwards compatible with Spring 1.2 and everything described in this chapter is fully supported in Spring 2.0.

7.2. Pointcut API in Spring

Let's look at how Spring handles the crucial pointcut concept.

7.2.1. Concepts

Spring's pointcut model enables pointcut reuse independent of advice types. It's possible to target different advice using the same pointcut.

The org.springframework.aop.Pointcut interface is the central interface, used to target advices to particular classes and methods. The complete interface is shown below:

public interface Pointcut {

    ClassFilter getClassFilter();

    MethodMatcher getMethodMatcher();

}

Splitting the Pointcut interface into two parts allows reuse of class and method matching parts, and fine-grained composition operations (such as performing a "union" with another method matcher).

The ClassFilter interface is used to restrict the pointcut to a given set of target classes. If the matches() method always returns true, all target classes will be matched:

public interface ClassFilter {

    boolean matches(Class clazz);
}

The MethodMatcher interface is normally more important. The complete interface is shown below:

public interface MethodMatcher {

    boolean matches(Method m, Class targetClass);

    boolean isRuntime();

    boolean matches(Method m, Class targetClass, Object[] args);
}

The matches(Method, Class) method is used to test whether this pointcut will ever match a given method on a target class. This evaluation can be performed when an AOP proxy is created, to avoid the need for a test on every method invocation. If the 2-argument matches method returns true for a given method, and the isRuntime() method for the MethodMatcher returns true, the 3-argument matches method will be invoked on every method invocation. This enables a pointcut to look at the arguments passed to the method invocation immediately before the target advice is to execute.

Most MethodMatchers are static, meaning that their isRuntime() method returns false. In this case, the 3-argument matches method will never be invoked.

[Tip]Tip

If possible, try to make pointcuts static, allowing the AOP framework to cache the results of pointcut evaluation when an AOP proxy is created.

7.2.2. Operations on pointcuts

Spring supports operations on pointcuts: notably, union and intersection.

  • Union means the methods that either pointcut matches.

  • Intersection means the methods that both pointcuts match.

  • Union is usually more useful.

  • Pointcuts can be composed using the static methods in the org.springframework.aop.support.Pointcuts class (as sketched below), or using the ComposablePointcut class in the same package. However, using AspectJ pointcut expressions is usually a simpler approach.
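
For example, composing two existing pointcuts with the static methods of the Pointcuts class might look like this (settersPointcut and gettersPointcut are assumed to be Pointcut instances defined elsewhere):

import org.springframework.aop.Pointcut;
import org.springframework.aop.support.Pointcuts;

// matches methods matched by either pointcut
Pointcut settersOrGetters = Pointcuts.union(settersPointcut, gettersPointcut);

// matches only methods matched by both pointcuts
Pointcut settersAndGetters = Pointcuts.intersection(settersPointcut, gettersPointcut);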

7.2.3. AspectJ expression pointcuts

Since 2.0, the most important type of pointcut used by Spring is org.springframework.aop.aspectj.AspectJExpressionPointcut. This is a pointcut that uses an AspectJ supplied library to parse an AspectJ pointcut expression string.

See the previous chapter for a discussion of supported AspectJ pointcut primitives.
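
A minimal programmatic sketch of this pointcut type follows; the expression shown is just an illustrative service-layer pattern for the example application package:

import org.springframework.aop.aspectj.AspectJExpressionPointcut;

// parses the expression using the AspectJ library and exposes it as a Spring Pointcut
AspectJExpressionPointcut pointcut = new AspectJExpressionPointcut();
pointcut.setExpression("execution(* com.xyz.myapp.service.*.*(..))");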

7.2.4. Convenience pointcut implementations

Spring provides several convenient pointcut implementations. Some can be used out of the box; others are intended to be subclassed in application-specific pointcuts.

7.2.4.1. Static pointcuts

Static pointcuts are based on method and target class, and cannot take into account the method's arguments. Static pointcuts are sufficient - and best - for most usages. It's possible for Spring to evaluate a static pointcut only once, when a method is first invoked: after that, there is no need to evaluate the pointcut again with each method invocation.

Let's consider some static pointcut implementations included with Spring.

7.2.4.1.1. Regular expression pointcuts

One obvious way to specify static pointcuts is regular expressions. Several AOP frameworks besides Spring make this possible. org.springframework.aop.support.Perl5RegexpMethodPointcut is a generic regular expression pointcut, using Perl 5 regular expression syntax. The Perl5RegexpMethodPointcut class depends on Jakarta ORO for regular expression matching. Spring also provides the JdkRegexpMethodPointcut class that uses the regular expression support in JDK 1.4+.

Using the Perl5RegexpMethodPointcut class, you can provide a list of pattern Strings. If any of these is a match, the pointcut will evaluate to true. (So the result is effectively the union of these pointcuts.)

The usage is shown below:

<bean id="settersAndAbsquatulatePointcut" 
    class="org.springframework.aop.support.Perl5RegexpMethodPointcut">
    <property name="patterns">
        <list>
            <value>.*set.*</value>
            <value>.*absquatulate</value>
        </list>
    </property>
</bean>

Spring provides a convenience class, RegexpMethodPointcutAdvisor, that allows us to also reference an Advice (remember that an Advice can be an interceptor, before advice, throws advice etc.). Behind the scenes, Spring will use the JdkRegexpMethodPointcut on J2SE 1.4 or above, and will fall back to Perl5RegexpMethodPointcut on older VMs. The use of Perl5RegexpMethodPointcut can be forced by setting the perl5 property to true. Using RegexpMethodPointcutAdvisor simplifies wiring, as the one bean encapsulates both pointcut and advice, as shown below:

<bean id="settersAndAbsquatulateAdvisor" 
    class="org.springframework.aop.support.RegexpMethodPointcutAdvisor">
    <property name="advice">
        <ref local="beanNameOfAopAllianceInterceptor"/>
    </property>
    <property name="patterns">
        <list>
            <value>.*set.*</value>
            <value>.*absquatulate</value>
        </list>
    </property>
</bean>

RegexpMethodPointcutAdvisor can be used with any Advice type.

7.2.4.1.2. Attribute-driven pointcuts

An important type of static pointcut is a metadata-driven pointcut. This uses the values of metadata attributes: typically, source-level metadata.

7.2.4.2. Dynamic pointcuts

Dynamic pointcuts are costlier to evaluate than static pointcuts. They take into account method arguments, as well as static information. This means that they must be evaluated with every method invocation; the result cannot be cached, as arguments will vary.

The main example is the control flow pointcut.

7.2.4.2.1. Control flow pointcuts

Spring control flow pointcuts are conceptually similar to AspectJ cflow pointcuts, although less powerful. (There is currently no way to specify that a pointcut executes below a join point matched by another pointcut.) A control flow pointcut matches the current call stack. For example, it might fire if the join point was invoked by a method in the com.mycompany.web package, or by the SomeCaller class. Control flow pointcuts are specified using the org.springframework.aop.support.ControlFlowPointcut class.
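
A brief sketch, using the hypothetical SomeCaller class mentioned above:

import org.springframework.aop.support.ControlFlowPointcut;

// matches join points reached while any method of SomeCaller is on the call stack
ControlFlowPointcut underSomeCaller = new ControlFlowPointcut(SomeCaller.class);

// matches only join points reached from SomeCaller's handleRequest method
ControlFlowPointcut underHandleRequest = new ControlFlowPointcut(SomeCaller.class, "handleRequest");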

[Note]Note

Control flow pointcuts are significantly more expensive to evaluate at runtime than even other dynamic pointcuts. In Java 1.4, the cost is about five times that of other dynamic pointcuts; in Java 1.3, more than ten times.

7.2.5. Pointcut superclasses

Spring provides useful pointcut superclasses to help you to implement your own pointcuts.

Because static pointcuts are most useful, you'll probably subclass StaticMethodMatcherPointcut, as shown below. This requires implementing just one abstract method (although it's possible to override other methods to customize behavior):

class TestStaticPointcut extends StaticMethodMatcherPointcut {

    public boolean matches(Method m, Class targetClass) {
        // return true if custom criteria match; this placeholder matches every method
        return true;
    }
}

There are also superclasses for dynamic pointcuts.
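
For example, DynamicMethodMatcherPointcut can be subclassed in much the same way. The sketch below is purely illustrative and matches only invocations whose first argument is null:

import java.lang.reflect.Method;

import org.springframework.aop.support.DynamicMethodMatcherPointcut;

class NullFirstArgumentPointcut extends DynamicMethodMatcherPointcut {

    public boolean matches(Method m, Class targetClass, Object[] args) {
        // evaluated on every invocation because isRuntime() returns true for dynamic matchers
        return args.length > 0 && args[0] == null;
    }
}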

You can use custom pointcuts with any advice type in Spring 1.0 RC2 and above.

7.2.6. Custom pointcuts

Because pointcuts in Spring AOP are Java classes, rather than language features (as in AspectJ) it's possible to declare custom pointcuts, whether static or dynamic. Custom pointcuts in Spring can be arbitrarily complex. However, using the AspectJ pointcut expression language is recommended if possible.

[Note]Note

Later versions of Spring may offer support for "semantic pointcuts" as offered by JAC: for example, "all methods that change instance variables in the target object."

7.3. Advice API in Spring

Let's now look at how Spring AOP handles advice.

7.3.1. Advice lifecycles

Each advice is a Spring bean. An advice instance can be shared across all advised objects, or unique to each advised object. This corresponds to per-class or per-instance advice.

Per-class advice is used most often. It is appropriate for generic advice such as transaction advisors. These do not depend on the state of the proxied object or add new state; they merely act on the method and arguments.

Per-instance advice is appropriate for introductions, to support mixins. In this case, the advice adds state to the proxied object.

It's possible to use a mix of shared and per-instance advice in the same AOP proxy.

7.3.2. Advice types in Spring

Spring provides several advice types out of the box, and is extensible to support arbitrary advice types. Let us look at the basic concepts and standard advice types.

7.3.2.1. Interception around advice

The most fundamental advice type in Spring is interception around advice.

Spring is compliant with the AOP Alliance interface for around advice using method interception. MethodInterceptors implementing around advice should implement the following interface:

public interface MethodInterceptor extends Interceptor {
  
    Object invoke(MethodInvocation invocation) throws Throwable;
}

The MethodInvocation argument to the invoke() method exposes the method being invoked; the target join point; the AOP proxy; and the arguments to the method. The invoke() method should return the invocation's result: the return value of the join point.

A simple MethodInterceptor implementation looks as follows:

public class DebugInterceptor implements MethodInterceptor {

    public Object invoke(MethodInvocation invocation) throws Throwable {
        System.out.println("Before: invocation=[" + invocation + "]");
        Object rval = invocation.proceed();
        System.out.println("Invocation returned");
        return rval;
    }
}

Note the call to the MethodInvocation's proceed() method. This proceeds down the interceptor chain towards the join point. Most interceptors will invoke this method, and return its return value. However, a MethodInterceptor, like any around advice, can return a different value or throw an exception rather than invoke the proceed method. That said, you don't want to do this without good reason!

[Note]Note

MethodInterceptors offer interoperability with other AOP Alliance-compliant AOP implementations. The other advice types discussed in the remainder of this section implement common AOP concepts, but in a Spring-specific way. While there is an advantage in using the most specific advice type, stick with MethodInterceptor around advice if you are likely to want to run the aspect in another AOP framework. Note that pointcuts are not currently interoperable between frameworks, and the AOP Alliance does not currently define pointcut interfaces.

7.3.2.2. Before advice

A simpler advice type is a before advice. This does not need a MethodInvocation object, since it will only be called before entering the method.

The main advantage of a before advice is that there is no need to invoke the proceed() method, and therefore no possibility of inadvertently failing to proceed down the interceptor chain.

The MethodBeforeAdvice interface is shown below. (Spring's API design would allow for field before advice, although the usual objections apply to field interception and it's unlikely that Spring will ever implement it).

public interface MethodBeforeAdvice extends BeforeAdvice {

    void before(Method m, Object[] args, Object target) throws Throwable;
}

Note the return type is void. Before advice can insert custom behavior before the join point executes, but cannot change the return value. If a before advice throws an exception, this will abort further execution of the interceptor chain. The exception will propagate back up the interceptor chain. If it is unchecked, or on the signature of the invoked method, it will be passed directly to the client; otherwise it will be wrapped in an unchecked exception by the AOP proxy.

An example of a before advice in Spring, which counts all method invocations:

public class CountingBeforeAdvice implements MethodBeforeAdvice {

    private int count;

    public void before(Method m, Object[] args, Object target) throws Throwable {
        ++count;
    }

    public int getCount() { 
        return count; 
    }
}
[Tip]Tip

Before advice can be used with any pointcut.

7.3.2.3. Throws advice

Throws advice is invoked after the return of the join point if the join point threw an exception. Spring offers typed throws advice. Note that this means that the org.springframework.aop.ThrowsAdvice interface does not contain any methods: it is a tag interface identifying that the given object implements one or more typed throws advice methods. These should be in the form of:

afterThrowing([Method, args, target], subclassOfThrowable) 

Only the last argument is required. The method signatures may have either one or four arguments, depending on whether the advice method is interested in the method and arguments. The following classes are examples of throws advice.

The advice below is invoked if a RemoteException is thrown (including subclasses):

public class RemoteThrowsAdvice implements ThrowsAdvice {

    public void afterThrowing(RemoteException ex) throws Throwable {
        // Do something with remote exception
    }
}

The following advice is invoked if a ServletException is thrown. Unlike the above advice, it declares 4 arguments, so that it has access to the invoked method, method arguments and target object:

public class ServletThrowsAdviceWithArguments implements ThrowsAdvice {

    public void afterThrowing(Method m, Object[] args, Object target, ServletException ex) {
        // Do something with all arguments
    }
}

The final example illustrates how these two methods could be used in a single class, which handles both RemoteException and ServletException. Any number of throws advice methods can be combined in a single class.

public static class CombinedThrowsAdvice implements ThrowsAdvice {

    public void afterThrowing(RemoteException ex) throws Throwable {
        // Do something with remote exception
    }
 
    public void afterThrowing(Method m, Object[] args, Object target, ServletException ex) {
        // Do something with all arguments
    }
}
[Tip]Tip

Throws advice can be used with any pointcut.

7.3.2.4. After Returning advice

An after returning advice in Spring must implement the org.springframework.aop.AfterReturningAdvice interface, shown below:

public interface AfterReturningAdvice extends Advice {

    void afterReturning(Object returnValue, Method m, Object[] args, Object target) 
            throws Throwable;
}

An after returning advice has access to the return value (which it cannot modify), the invoked method, the method's arguments and the target.

The following after returning advice counts all successful method invocations that have not thrown exceptions:

public class CountingAfterReturningAdvice implements AfterReturningAdvice {

    private int count;

    public void afterReturning(Object returnValue, Method m, Object[] args, Object target)
            throws Throwable {
        ++count;
    }

    public int getCount() {
        return count;
    }
}

This advice doesn't change the execution path. If it throws an exception, this will be thrown up the interceptor chain instead of the return value.

[Tip]Tip

After returning advice can be used with any pointcut.

7.3.2.5. Introduction advice

Spring treats introduction advice as a special kind of interception advice.

Introduction requires an IntroductionAdvisor, and an IntroductionInterceptor, implementing the following interface:

public interface IntroductionInterceptor extends MethodInterceptor {

    boolean implementsInterface(Class intf);
}

The invoke() method inherited from the AOP Alliance MethodInterceptor interface must implement the introduction: that is, if the invoked method is on an introduced interface, the introduction interceptor is responsible for handling the method call - it cannot invoke proceed().

Introduction advice cannot be used with any pointcut, as it applies only at class, rather than method, level. You can only use introduction advice with the IntroductionAdvisor, which has the following methods:

public interface IntroductionAdvisor extends Advisor, IntroductionInfo {

    ClassFilter getClassFilter();

    void validateInterfaces() throws IllegalArgumentException;
}

public interface IntroductionInfo {

    Class[] getInterfaces();
}

There is no MethodMatcher, and hence no Pointcut, associated with introduction advice. Only class filtering is logical.

The getInterfaces() method returns the interfaces introduced by this advisor.

The validateInterfaces() method is used internally to see whether or not the introduced interfaces can be implemented by the configured IntroductionInterceptor.

Let's look at a simple example from the Spring test suite. Let's suppose we want to introduce the following interface to one or more objects:

public interface Lockable {
    void lock();
    void unlock();
    boolean locked();
}

This illustrates a mixin. We want to be able to cast advised objects to Lockable, whatever their type, and call lock and unlock methods. If we call the lock() method, we want all setter methods to throw a LockedException. Thus we can add an aspect that provides the ability to make objects immutable, without them having any knowledge of it: a good example of AOP.

Firstly, we'll need an IntroductionInterceptor that does the heavy lifting. In this case, we extend the org.springframework.aop.support.DelegatingIntroductionInterceptor convenience class. We could implement IntroductionInterceptor directly, but using DelegatingIntroductionInterceptor is best for most cases.

The DelegatingIntroductionInterceptor is designed to delegate an introduction to an actual implementation of the introduced interface(s), concealing the use of interception to do so. The delegate can be set to any object using a constructor argument; the default delegate (when the no-arg constructor is used) is this. Thus in the example below, the delegate is the LockMixin subclass of DelegatingIntroductionInterceptor. Given a delegate (by default itself), a DelegatingIntroductionInterceptor instance looks for all interfaces implemented by the delegate (other than IntroductionInterceptor), and will support introductions against any of them. It's possible for subclasses such as LockMixin to call the suppressInterface(Class intf) method to suppress interfaces that should not be exposed. However, no matter how many interfaces an IntroductionInterceptor is prepared to support, the IntroductionAdvisor used will control which interfaces are actually exposed. An introduced interface will conceal any implementation of the same interface by the target.

Thus LockMixin subclasses DelegatingIntroductionInterceptor and implements Lockable itself. The superclass automatically picks up that Lockable can be supported for introduction, so we don't need to specify that. We could introduce any number of interfaces in this way.

Note the use of the locked instance variable. This effectively adds additional state to that held in the target object.

public class LockMixin extends DelegatingIntroductionInterceptor 
    implements Lockable {

    private boolean locked;

    public void lock() {
        this.locked = true;
    }

    public void unlock() {
        this.locked = false;
    }

    public boolean locked() {
        return this.locked;
    }

    public Object invoke(MethodInvocation invocation) throws Throwable {
        if (locked() && invocation.getMethod().getName().indexOf("set") == 0)
            throw new LockedException();
        return super.invoke(invocation);
    }

}

Often it isn't necessary to override the invoke() method: the DelegatingIntroductionInterceptor implementation - which calls the delegate method if the method is introduced, otherwise proceeds towards the join point - is usually sufficient. In the present case, we need to add a check: no setter method can be invoked if in locked mode.

The introduction advisor required is simple. All it needs to do is hold a distinct LockMixin instance, and specify the introduced interfaces - in this case, just Lockable. A more complex example might take a reference to the introduction interceptor (which would be defined as a prototype): in this case, there's no configuration relevant for a LockMixin, so we simply create it using new.

public class LockMixinAdvisor extends DefaultIntroductionAdvisor {

    public LockMixinAdvisor() {
        super(new LockMixin(), Lockable.class);
    }
}

We can apply this advisor very simply: it requires no configuration. (However, it is necessary: It's impossible to use an IntroductionInterceptor without an IntroductionAdvisor.) As usual with introductions, the advisor must be per-instance, as it is stateful. We need a different instance of LockMixinAdvisor, and hence LockMixin, for each advised object. The advisor comprises part of the advised object's state.

We can apply this advisor programmatically, using the Advised.addAdvisor() method, or (the recommended way) in XML configuration, like any other advisor. All proxy creation choices discussed below, including "auto proxy creators," correctly handle introductions and stateful mixins.
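
For instance, a programmatic sketch using the ProxyFactory described in Section 7.7 below (targetObject stands in for whatever object is being advised):

ProxyFactory factory = new ProxyFactory(targetObject);
factory.addAdvisor(new LockMixinAdvisor());
Lockable lockable = (Lockable) factory.getProxy();
lockable.lock();   // setter invocations on the proxy will now throw LockedException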

7.4. Advisor API in Spring

In Spring, an Advisor is an aspect that contains just a single advice object associated with a pointcut expression.

Apart from the special case of introductions, any advisor can be used with any advice. org.springframework.aop.support.DefaultPointcutAdvisor is the most commonly used advisor class. For example, it can be used with a MethodInterceptor, BeforeAdvice or ThrowsAdvice.
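
A brief sketch of pairing a pointcut with an advice in a DefaultPointcutAdvisor, reusing the regular expression pattern and the Spring-provided DebugInterceptor shown elsewhere in this chapter:

import org.springframework.aop.interceptor.DebugInterceptor;
import org.springframework.aop.support.DefaultPointcutAdvisor;
import org.springframework.aop.support.JdkRegexpMethodPointcut;

JdkRegexpMethodPointcut pointcut = new JdkRegexpMethodPointcut();
pointcut.setPattern(".*set.*");

// the advisor associates the regular expression pointcut with an interception around advice
DefaultPointcutAdvisor advisor = new DefaultPointcutAdvisor(pointcut, new DebugInterceptor());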

It is possible to mix advisor and advice types in Spring in the same AOP proxy. For example, you could use an interception around advice, throws advice and before advice in one proxy configuration: Spring will automatically create the necessary interceptor chain.

7.5. Using the ProxyFactoryBean to create AOP proxies

If you're using the Spring IoC container (an ApplicationContext or BeanFactory) for your business objects - and you should be! - you will want to use one of Spring's AOP FactoryBeans. (Remember that a factory bean introduces a layer of indirection, enabling it to create objects of a different type.)

[Note]Note

The Spring 2.0 AOP support also uses factory beans under the covers.

The basic way to create an AOP proxy in Spring is to use the org.springframework.aop.framework.ProxyFactoryBean. This gives complete control over the pointcuts and advice that will apply, and their ordering. However, there are simpler options that are preferable if you don't need such control.

7.5.1. Basics

The ProxyFactoryBean, like other Spring FactoryBean implementations, introduces a level of indirection. If you define a ProxyFactoryBean with name foo, what objects referencing foo see is not the ProxyFactoryBean instance itself, but an object created by the ProxyFactoryBean's implementation of the getObject() method. This method will create an AOP proxy wrapping a target object.

One of the most important benefits of using a ProxyFactoryBean or another IoC-aware class to create AOP proxies, is that it means that advices and pointcuts can also be managed by IoC. This is a powerful feature, enabling certain approaches that are hard to achieve with other AOP frameworks. For example, an advice may itself reference application objects (besides the target, which should be available in any AOP framework), benefiting from all the pluggability provided by Dependency Injection.

7.5.2. JavaBean properties

In common with most FactoryBean implementations provided with Spring, the ProxyFactoryBean class is itself a JavaBean. Its properties are used to specify the target that you want to proxy and to configure how the proxy itself is created.

Some key properties are inherited from org.springframework.aop.framework.ProxyConfig (the superclass for all AOP proxy factories in Spring). These key properties include:

  • proxyTargetClass: true if the target class is to be proxied, rather than the target class' interfaces. If this property value is set to true, then CGLIB proxies will be created (but see also below the section entitled Section 7.5.3, “JDK- and CGLIB-based proxies”).

  • optimize: controls whether or not aggressive optimizations are applied to proxies created via CGLIB. One should not blithely use this setting unless one fully understands how the relevant AOP proxy handles optimization. This is currently used only for CGLIB proxies; it has no effect with JDK dynamic proxies.

  • frozen: if a proxy configuration is frozen, then changes to the configuration are no longer allowed. This is useful both as a slight optimization and for those cases when you don't want callers to be able to manipulate the proxy (via the Advised interface) after the proxy has been created. The default value of this property is false, so changes such as adding additional advice are allowed.

  • exposeProxy: determines whether or not the current proxy should be exposed in a ThreadLocal so that it can be accessed by the target. If a target needs to obtain the proxy and the exposeProxy property is set to true, the target can use the AopContext.currentProxy() method (a brief sketch follows this list).

  • aopProxyFactory: the implementation of AopProxyFactory to use. Offers a way of customizing whether to use dynamic proxies, CGLIB or any other proxy strategy. The default implementation will choose dynamic proxies or CGLIB appropriately. There should be no need to use this property; it is intended to allow the addition of new proxy types in Spring 1.1.
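
The following sketch shows a target re-entering itself via the exposed proxy so that a nested call also goes through the advice chain (the Service interface and its methods are hypothetical):

import org.springframework.aop.framework.AopContext;

public class SimpleService implements Service {

    public void doSomething() {
        // re-enter via the proxy so that the nested call is also advised;
        // requires exposeProxy to be set to true on the proxy configuration
        ((Service) AopContext.currentProxy()).doSomethingElse();
    }

    public void doSomethingElse() {
        // ...
    }
}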

Other properties specific to ProxyFactoryBean include:

  • proxyInterfaces: array of String interface names. If this isn't supplied, a CGLIB proxy for the target class will be used (but see also below the section entitled Section 7.5.3, “JDK- and CGLIB-based proxies”).

  • interceptorNames: String array of Advisor, interceptor or other advice names to apply. Ordering is significant, on a first come-first served basis. That is to say that the first interceptor in the list will be the first to be able to intercept the invocation.

    The names are bean names in the current factory, including bean names from ancestor factories. You can't mention bean references here since doing so would result in the ProxyFactoryBean ignoring the singleton setting of the advice.

    You can append an interceptor name with an asterisk (*). This will result in all advisor beans with names starting with the part before the asterisk being applied. An example of using this feature can be found in Section 7.5.6, “Using 'global' advisors”.

  • singleton: whether or not the factory should return a single object, no matter how often the getObject() method is called. Several FactoryBean implementations offer such a method. The default value is true. If you want to use stateful advice - for example, for stateful mixins - use prototype advices along with a singleton value of false.

7.5.3. JDK- and CGLIB-based proxies

This section serves as the definitive documentation on how the ProxyFactoryBean chooses to create either a JDK-based or a CGLIB-based proxy for a particular target object (that is to be proxied).

[Note]Note

The behavior of the ProxyFactoryBean with regard to creating JDK- or CGLIB-based proxies changed between versions 1.2.x and 2.0 of Spring. The ProxyFactoryBean now exhibits similar semantics with regard to auto-detecting interfaces as those of the TransactionProxyFactoryBean class.

If the class of a target object that is to be proxied (hereafter simply referred to as the target class) doesn't implement any interfaces, then a CGLIB-based proxy will be created. This is the easiest scenario, because JDK proxies are interface based, and no interfaces means JDK proxying isn't even possible. One simply plugs in the target bean, and specifies the list of interceptors via the interceptorNames property. Note that a CGLIB-based proxy will be created even if the proxyTargetClass property of the ProxyFactoryBean has been set to false. (Obviously this makes no sense, and is best removed from the bean definition because it is at best redundant, and at worst confusing.)

If the target class implements one (or more) interfaces, then the type of proxy that is created depends on the configuration of the ProxyFactoryBean.

If the proxyTargetClass property of the ProxyFactoryBean has been set to true, then a CGLIB-based proxy will be created. This makes sense, and is in keeping with the principle of least surprise. Even if the proxyInterfaces property of the ProxyFactoryBean has been set to one or more fully qualified interface names, the fact that the proxyTargetClass property is set to true will cause CGLIB-based proxying to be in effect.

If the proxyInterfaces property of the ProxyFactoryBean has been set to one or more fully qualified interface names, then a JDK-based proxy will be created. The created proxy will implement all of the interfaces that were specified in the proxyInterfaces property; if the target class happens to implement a whole lot more interfaces than those specified in the proxyInterfaces property, that is all well and good but those additional interfaces will not be implemented by the returned proxy.

If the proxyInterfaces property of the ProxyFactoryBean has not been set, but the target class does implement one (or more) interfaces, then the ProxyFactoryBean will auto-detect the fact that the target class does actually implement at least one interface, and a JDK-based proxy will be created. The interfaces that are actually proxied will be all of the interfaces that the target class implements; in effect, this is the same as simply supplying a list of each and every interface that the target class implements to the proxyInterfaces property. However, it is significantly less work, and less prone to typos.

7.5.4. Proxying interfaces

Let's look at a simple example of ProxyFactoryBean in action. This example involves:

  • A target bean that will be proxied. This is the "personTarget" bean definition in the example below.

  • An Advisor and an Interceptor used to provide advice.

  • An AOP proxy bean definition specifying the target object (the personTarget bean) and the interfaces to proxy, along with the advices to apply.

<bean id="personTarget" class="com.mycompany.PersonImpl">
    <property name="name"><value>Tony</value></property>
    <property name="age"><value>51</value></property>
</bean>

<bean id="myAdvisor" class="com.mycompany.MyAdvisor">
    <property name="someProperty"><value>Custom string property value</value></property>
</bean>

<bean id="debugInterceptor" class="org.springframework.aop.interceptor.DebugInterceptor">
</bean>

<bean id="person" 
    class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="proxyInterfaces"><value>com.mycompany.Person</value></property>

    <property name="target"><ref local="personTarget"/></property>
    <property name="interceptorNames">
        <list>
            <value>myAdvisor</value>
            <value>debugInterceptor</value>
        </list>
    </property>
</bean>

Note that the interceptorNames property takes a list of String: the bean names of the interceptor or advisors in the current factory. Advisors, interceptors, before, after returning and throws advice objects can be used. The ordering of advisors is significant.

[Note]Note

You might be wondering why the list doesn't hold bean references. The reason for this is that if the ProxyFactoryBean's singleton property is set to false, it must be able to return independent proxy instances. If any of the advisors is itself a prototype, an independent instance would need to be returned, so it's necessary to be able to obtain an instance of the prototype from the factory; holding a reference isn't sufficient.

The "person" bean definition above can be used in place of a Person implementation, as follows:

Person person = (Person) factory.getBean("person");

Other beans in the same IoC context can express a strongly typed dependency on it, as with an ordinary Java object:

<bean id="personUser" class="com.mycompany.PersonUser">
  <property name="person"><ref local="person" /></property>
</bean>

The PersonUser class in this example would expose a property of type Person. As far as it's concerned, the AOP proxy can be used transparently in place of a "real" person implementation. However, its class would be a dynamic proxy class. It would be possible to cast it to the Advised interface (discussed below).

It's possible to conceal the distinction between target and proxy using an anonymous inner bean, as follows. Only the ProxyFactoryBean definition is different; the advice is included only for completeness:

<bean id="myAdvisor" class="com.mycompany.MyAdvisor">
  <property name="someProperty"><value>Custom string property value</value></property>
</bean>

<bean id="debugInterceptor" class="org.springframework.aop.interceptor.DebugInterceptor"/>

<bean id="person" class="org.springframework.aop.framework.ProxyFactoryBean">
  <property name="proxyInterfaces"><value>com.mycompany.Person</value></property>
  <!-- Use inner bean, not local reference to target -->
  <property name="target">
    <bean class="com.mycompany.PersonImpl">
      <property name="name"><value>Tony</value></property>
      <property name="age"><value>51</value></property>
    </bean>
  </property>
  <property name="interceptorNames">
    <list>
      <value>myAdvisor</value>
      <value>debugInterceptor</value>
    </list>
  </property>
</bean>

This has the advantage that there's only one object of type Person: useful if we want to prevent users of the application context from obtaining a reference to the un-advised object, or need to avoid any ambiguity with Spring IoC autowiring. There's also arguably an advantage in that the ProxyFactoryBean definition is self-contained. However, there are times when being able to obtain the un-advised target from the factory might actually be an advantage: for example, in certain test scenarios.

7.5.5. Proxying classes

What if you need to proxy a class, rather than one or more interfaces?

Imagine that in our example above, there was no Person interface: we needed to advise a class called Person that didn't implement any business interface. In this case, you can configure Spring to use CGLIB proxying, rather than dynamic proxies. Simply set the proxyTargetClass property on the ProxyFactoryBean above to true. While it's best to program to interfaces, rather than classes, the ability to advise classes that don't implement interfaces can be useful when working with legacy code. (In general, Spring isn't prescriptive. While it makes it easy to apply good practices, it avoids forcing a particular approach.)

If you want to, you can force the use of CGLIB in any case, even if you do have interfaces.

CGLIB proxying works by generating a subclass of the target class at runtime. Spring configures this generated subclass to delegate method calls to the original target: the subclass is used to implement the Decorator pattern, weaving in the advice.

CGLIB proxying should generally be transparent to users. However, there are some issues to consider:

  • Final methods can't be advised, as they can't be overridden.

  • You'll need the CGLIB 2 binaries on your classpath; dynamic proxies are available with the JDK.

There's little performance difference between CGLIB proxying and dynamic proxies. As of Spring 1.0, dynamic proxies are slightly faster. However, this may change in the future. Performance should not be a decisive consideration in this case.

7.5.6. Using 'global' advisors

By appending an asterisk to an interceptor name, all advisors with bean names matching the part before the asterisk will be added to the advisor chain. This can come in handy if you need to add a standard set of 'global' advisors:

<bean id="proxy" class="org.springframework.aop.framework.ProxyFactoryBean">
  <property name="target" ref="service"/>
  <property name="interceptorNames">
    <list>
      <value>global*</value>
    </list>
  </property>
</bean>

<bean id="global_debug" class="org.springframework.aop.interceptor.DebugInterceptor"/>
<bean id="global_performance" class="org.springframework.aop.interceptor.PerformanceMonitorInterceptor"/>

7.6. Concise proxy definitions

Especially when defining transactional proxies, you may end up with many similar proxy definitions. The use of parent and child bean definitions, along with inner bean definitions, can result in much cleaner and more concise proxy definitions.

First a parent, template, bean definition is created for the proxy:

<bean id="txProxyTemplate" abstract="true"
        class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
  <property name="transactionManager" ref="transactionManager"/>
  <property name="transactionAttributes">
    <props>
      <prop key="*">PROPAGATION_REQUIRED</prop>
    </props>
  </property>
</bean>

This will never be instantiated itself, so may actually be incomplete. Then each proxy which needs to be created is just a child bean definition, which wraps the target of the proxy as an inner bean definition, since the target will never be used on its own anyway.

<bean id="myService" parent="txProxyTemplate">
  <property name="target">
    <bean class="org.springframework.samples.MyServiceImpl">
    </bean>
  </property>
</bean>

It is of course possible to override properties from the parent template, such as in this case, the transaction propagation settings:

<bean id="mySpecialService" parent="txProxyTemplate">
  <property name="target">
    <bean class="org.springframework.samples.MySpecialServiceImpl">
    </bean>
  </property>
  <property name="transactionAttributes">
    <props>
      <prop key="get*">PROPAGATION_REQUIRED,readOnly</prop>
      <prop key="find*">PROPAGATION_REQUIRED,readOnly</prop>
      <prop key="load*">PROPAGATION_REQUIRED,readOnly</prop>
      <prop key="store*">PROPAGATION_REQUIRED</prop>
    </props>
  </property>
</bean>

Note that in the example above, we have explicitly marked the parent bean definition as abstract by using the abstract attribute, as described previously, so that it may not actually ever be instantiated. Application contexts (but not simple bean factories) will by default pre-instantiate all singletons. It is therefore important (at least for singleton beans) that if you have a (parent) bean definition which you intend to use only as a template, and this definition specifies a class, you must make sure to set the abstract attribute to true, otherwise the application context will actually try to pre-instantiate it.

7.7. Creating AOP proxies programmatically with the ProxyFactory

It's easy to create AOP proxies programmatically using Spring. This enables you to use Spring AOP without dependency on Spring IoC.

The following listing shows creation of a proxy for a target object, with one interceptor and one advisor. The interfaces implemented by the target object will automatically be proxied:

ProxyFactory factory = new ProxyFactory(myBusinessInterfaceImpl);
factory.addInterceptor(myMethodInterceptor);
factory.addAdvisor(myAdvisor);
MyBusinessInterface tb = (MyBusinessInterface) factory.getProxy();

The first step is to construct an object of type org.springframework.aop.framework.ProxyFactory. You can create this with a target object, as in the above example, or specify the interfaces to be proxied in an alternate constructor.

You can add interceptors or advisors, and manipulate them for the life of the ProxyFactory. If you add an IntroductionInterceptionAroundAdvisor you can cause the proxy to implement additional interfaces.

There are also convenience methods on ProxyFactory (inherited from AdvisedSupport) which allow you to add other advice types such as before and throws advice. AdvisedSupport is the superclass of both ProxyFactory and ProxyFactoryBean.
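
For example, the before and throws advice defined earlier in this chapter can be added alongside the interfaces-based proxying shown in the listing above (MyBusinessInterface and myBusinessInterfaceImpl are the same illustrative types used there):

ProxyFactory factory = new ProxyFactory(myBusinessInterfaceImpl);
factory.addAdvice(new CountingBeforeAdvice());   // before advice shown earlier
factory.addAdvice(new RemoteThrowsAdvice());     // throws advice shown earlier
MyBusinessInterface proxy = (MyBusinessInterface) factory.getProxy();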

[Tip]Tip

Integrating AOP proxy creation with the IoC framework is best practice in most applications. We recommend that you externalize configuration from Java code when using AOP, as you would in general.

7.8. Manipulating advised objects

However you create AOP proxies, you can manipulate them using the org.springframework.aop.framework.Advised interface. Any AOP proxy can be cast to this interface, whichever other interfaces it implements. This interface includes the following methods:

Advisor[] getAdvisors();

void addAdvice(Advice advice) throws AopConfigException;

void addAdvice(int pos, Advice advice) 
        throws AopConfigException;

void addAdvisor(Advisor advisor) throws AopConfigException;

void addAdvisor(int pos, Advisor advisor) throws AopConfigException;

int indexOf(Advisor advisor);

boolean removeAdvisor(Advisor advisor) throws AopConfigException;

void removeAdvisor(int index) throws AopConfigException;

boolean replaceAdvisor(Advisor a, Advisor b) throws AopConfigException;

boolean isFrozen();

The getAdvisors() method will return an Advisor for every advisor, interceptor or other advice type that has been added to the factory. If you added an Advisor, the returned advisor at this index will be the object that you added. If you added an interceptor or other advice type, Spring will have wrapped this in an advisor with a pointcut that always returns true. Thus if you added a MethodInterceptor, the advisor returned for this index will be a DefaultPointcutAdvisor returning your MethodInterceptor and a pointcut that matches all classes and methods.

The addAdvisor() methods can be used to add any Advisor. Usually the advisor holding pointcut and advice will be the generic DefaultPointcutAdvisor, which can be used with any advice or pointcut (but not for introductions).

By default, it's possible to add or remove advisors or interceptors even once a proxy has been created. The only restriction is that it's impossible to add or remove an introduction advisor, as existing proxies from the factory will not show the interface change. (You can obtain a new proxy from the factory to avoid this problem.)

A simple example of casting an AOP proxy to the Advised interface and examining and manipulating its advice:

Advised advised = (Advised) myObject;
Advisor[] advisors = advised.getAdvisors();
int oldAdvisorCount = advisors.length;
System.out.println(oldAdvisorCount + " advisors");

// Add an advice like an interceptor without a pointcut
// Will match all proxied methods
// Can use for interceptors, before, after returning or throws advice
advised.addAdvice(new DebugInterceptor());

// Add selective advice using a pointcut
advised.addAdvisor(new DefaultPointcutAdvisor(mySpecialPointcut, myAdvice));

assertEquals("Added two advisors",
     oldAdvisorCount + 2, advised.getAdvisors().length);
[Note]Note

It's questionable whether it's advisable (no pun intended) to modify advice on a business object in production, although there are no doubt legitimate use cases. However, it can be very useful in development: for example, in tests. I have sometimes found it very useful to be able to add test code in the form of an interceptor or other advice, getting inside a method invocation I want to test. (For example, the advice can get inside a transaction created for that method, perhaps to run SQL to check that a database was correctly updated, before marking the transaction for rollback.)

Depending on how you created the proxy, you can usually set a frozen flag, in which case the Advised isFrozen() method will return true, and any attempts to modify advice through addition or removal will result in an AopConfigException. The ability to freeze the state of an advised object is useful in some cases, for example, to prevent calling code removing a security interceptor. It may also be used in Spring 1.1 to allow aggressive optimization if runtime advice modification is known not to be required.
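
A brief sketch of freezing a programmatically created proxy (target stands in for any object being advised):

ProxyFactory factory = new ProxyFactory(target);
factory.addAdvice(new DebugInterceptor());
factory.setFrozen(true);

Advised advised = (Advised) factory.getProxy();
boolean frozen = advised.isFrozen();   // returns true
// advised.addAdvice(new DebugInterceptor()) would now throw AopConfigException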

7.9. Using the "autoproxy" facility

So far we've considered explicit creation of AOP proxies using a ProxyFactoryBean or similar factory bean.

Spring also allows us to use "autoproxy" bean definitions, which can automatically proxy selected bean definitions. This is built on Spring "bean post processor" infrastructure, which enables modification of any bean definition as the container loads.

In this model, you set up some special bean definitions in your XML bean definition file to configure the auto proxy infrastructure. This allows you just to declare the targets eligible for autoproxying: you don't need to use ProxyFactoryBean.

There are two ways to do this:

  • Using an autoproxy creator that refers to specific beans in the current context.

  • A special case of autoproxy creation that deserves to be considered separately: autoproxy creation driven by source-level metadata attributes.

7.9.1. Autoproxy bean definitions

The org.springframework.aop.framework.autoproxy package provides the following standard autoproxy creators.

7.9.1.1. BeanNameAutoProxyCreator

The BeanNameAutoProxyCreator automatically creates AOP proxies for beans with names matching literal values or wildcards.

<bean class="org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator">
  <property name="beanNames"><value>jdk*,onlyJdk</value></property>
  <property name="interceptorNames">
    <list>
      <value>myInterceptor</value>
    </list>
  </property>
</bean>

As with ProxyFactoryBean, there is an interceptorNames property rather than a list of interceptors, to allow correct behavior for prototype advisors. Named "interceptors" can be advisors or any advice type.

As with auto proxying in general, the main point of using BeanNameAutoProxyCreator is to apply the same configuration consistently to multiple objects, with minimal volume of configuration. It is a popular choice for applying declarative transactions to multiple objects.

Bean definitions whose names match, such as "jdkMyBean" and "onlyJdk" in the above example, are plain old bean definitions with the target class. An AOP proxy will be created automatically by the BeanNameAutoProxyCreator. The same advice will be applied to all matching beans. Note that if advisors are used (rather than the interceptor in the above example), the pointcuts may apply differently to different beans.

7.9.1.2. DefaultAdvisorAutoProxyCreator

A more general and extremely powerful auto proxy creator is DefaultAdvisorAutoProxyCreator. This will automagically apply eligible advisors in the current context, without the need to include specific bean names in the autoproxy advisor's bean definition. It offers the same merit of consistent configuration and avoidance of duplication as BeanNameAutoProxyCreator.

Using this mechanism involves:

  • Specifying a DefaultAdvisorAutoProxyCreator bean definition.

  • Specifying any number of Advisors in the same or related contexts. Note that these must be Advisors, not just interceptors or other advices. This is necessary because there must be a pointcut to evaluate, to check the eligibility of each advice to candidate bean definitions.

The DefaultAdvisorAutoProxyCreator will automatically evaluate the pointcut contained in each advisor, to see what (if any) advice it should apply to each business object (such as "businessObject1" and "businessObject2" in the example below).

This means that any number of advisors can be applied automatically to each business object. If no pointcut in any of the advisors matches any method in a business object, the object will not be proxied. As bean definitions are added for new business objects, they will automatically be proxied if necessary.

Autoproxying in general has the advantage of making it impossible for callers or dependencies to obtain an un-advised object. Calling getBean("businessObject1") on this ApplicationContext will return an AOP proxy, not the target business object. (The "inner bean" idiom shown earlier also offers this benefit.)

<bean class="org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator"/>

<bean class="org.springframework.transaction.interceptor.TransactionAttributeSourceAdvisor">
  <property name="transactionInterceptor" ref="transactionInterceptor"/>
</bean>

<bean id="customAdvisor" class="com.mycompany.MyAdvisor"/>

<bean id="businessObject1" class="com.mycompany.BusinessObject1">
  <!-- Properties omitted -->
</bean>

<bean id="businessObject2" class="com.mycompany.BusinessObject2"/>

The DefaultAdvisorAutoProxyCreator is very useful if you want to apply the same advice consistently to many business objects. Once the infrastructure definitions are in place, you can simply add new business objects without including specific proxy configuration. You can also drop in additional aspects very easily - for example, tracing or performance monitoring aspects - with minimal change to configuration.
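
For example, the following sketch assumes the definitions above live in a hypothetical 'autoproxy.xml' file on the classpath, and uses Spring's AopUtils helper to confirm that the bean returned by the container is indeed a proxy:

ApplicationContext ctx = new ClassPathXmlApplicationContext("autoproxy.xml");
Object businessObject1 = ctx.getBean("businessObject1");

// an advisor matched, so the container returns an AOP proxy rather than the raw target
System.out.println(AopUtils.isAopProxy(businessObject1)); // prints 'true'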

The DefaultAdvisorAutoProxyCreator offers support for filtering (using a naming convention so that only certain advisors are evaluated, allowing use of multiple, differently configured, AdvisorAutoProxyCreators in the same factory) and ordering. Advisors can implement the org.springframework.core.Ordered interface to ensure correct ordering if this is an issue. The TransactionAttributeSourceAdvisor used in the above example has a configurable order value; the default setting is unordered.

7.9.1.3. AbstractAdvisorAutoProxyCreator

This is the superclass of DefaultAdvisorAutoProxyCreator. You can create your own autoproxy creators by subclassing this class, in the unlikely event that advisor definitions offer insufficient customization of the behavior of the framework's DefaultAdvisorAutoProxyCreator.

7.9.2. Using metadata-driven auto-proxying

A particularly important type of autoproxying is driven by metadata. This produces a similar programming model to .NET ServicedComponents. Instead of using XML deployment descriptors as in EJB, configuration for transaction management and other enterprise services is held in source-level attributes.

In this case, you use the DefaultAdvisorAutoProxyCreator, in combination with Advisors that understand metadata attributes. The metadata specifics are held in the pointcut part of the candidate advisors, rather than in the autoproxy creation class itself.

This is really a special case of the DefaultAdvisorAutoProxyCreator, but deserves consideration on its own. (The metadata-aware code is in the pointcuts contained in the advisors, not the AOP framework itself.)

The /attributes directory of the JPetStore sample application shows the use of attribute-driven autoproxying. In this case, there's no need to use the TransactionProxyFactoryBean. Simply defining transactional attributes on business objects is sufficient, because of the use of metadata-aware pointcuts. The bean definitions include the following code, in /WEB-INF/declarativeServices.xml. Note that this is generic, and can be used outside the JPetStore:

<bean class="org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator"/>

<bean class="org.springframework.transaction.interceptor.TransactionAttributeSourceAdvisor">
  <property name="transactionInterceptor" ref="transactionInterceptor"/>
</bean>

<bean id="transactionInterceptor"
    class="org.springframework.transaction.interceptor.TransactionInterceptor">
  <property name="transactionManager" ref="transactionManager"/>
  <property name="transactionAttributeSource">
    <bean class="org.springframework.transaction.interceptor.AttributesTransactionAttributeSource">
      <property name="attributes" ref="attributes"/>
    </bean>
  </property>
</bean>

<bean id="attributes" class="org.springframework.metadata.commons.CommonsAttributes"/>

The DefaultAdvisorAutoProxyCreator bean definition (the name is not significant, hence it can even be omitted) will pick up all eligible pointcuts in the current application context. In this case, the TransactionAttributeSourceAdvisor bean definition will apply to classes or methods carrying a transaction attribute. The TransactionAttributeSourceAdvisor depends on a TransactionInterceptor, supplied here via the 'transactionInterceptor' property reference. The AttributesTransactionAttributeSource depends on an implementation of the org.springframework.metadata.Attributes interface. In this fragment, the "attributes" bean satisfies this, using the Jakarta Commons Attributes API to obtain attribute information. (The application code must have been compiled using the Commons Attributes compilation task.)

The /annotation directory of the JPetStore sample application contains an analogous example for auto-proxying driven by JDK 1.5+ annotations. The following configuration enables automatic detection of Spring's Transactional annotation, leading to implicit proxies for beans containing that annotation:

<bean class="org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator"/>

<bean class="org.springframework.transaction.interceptor.TransactionAttributeSourceAdvisor">
  <property name="transactionInterceptor" ref="transactionInterceptor"/>
</bean>

<bean id="transactionInterceptor"
    class="org.springframework.transaction.interceptor.TransactionInterceptor">
  <property name="transactionManager" ref="transactionManager"/>
  <property name="transactionAttributeSource">
    <bean class="org.springframework.transaction.annotation.AnnotationTransactionAttributeSource"/>
  </property>
</bean>

The TransactionInterceptor defined here depends on a PlatformTransactionManager definition, which is not included in this generic file (although it could be) because it will be specific to the application's transaction requirements (typically JTA, as in this example, or Hibernate, JDO or JDBC):

<bean id="transactionManager" 
    class="org.springframework.transaction.jta.JtaTransactionManager"/>
[Tip]Tip

If you require only declarative transaction management, using these generic XML definitions will result in Spring automatically proxying all classes or methods with transaction attributes. You won't need to work directly with AOP, and the programming model is similar to that of .NET ServicedComponents.

This mechanism is extensible. It's possible to do autoproxying based on custom attributes. You need to:

  • Define your custom attribute.

  • Specify an Advisor with the necessary advice, including a pointcut that is triggered by the presence of the custom attribute on a class or method. You may be able to use an existing advice, merely implementing a static pointcut that picks up the custom attribute.

It's possible for such advisors to be unique to each advised class (for example, mixins): they simply need to be defined as prototype, rather than singleton, bean definitions. For example, the LockMixin introduction interceptor from the Spring test suite, shown above, could be used in conjunction with an attribute-driven pointcut to target a mixin, as shown here. We use the generic DefaultPointcutAdvisor, configured using JavaBean properties:

<bean id="lockMixin" class="org.springframework.aop.LockMixin"
    scope="prototype"/>

<bean id="lockableAdvisor" class="org.springframework.aop.support.DefaultPointcutAdvisor"
    scope="prototype">
  <property name="pointcut" ref="myAttributeAwarePointcut"/>
  <property name="advice" ref="lockMixin"/>
</bean>

<bean id="anyBean" class="anyclass" ...

If the attribute-aware pointcut matches any methods in the anyBean or other bean definitions, the mixin will be applied. Note that both the lockMixin and lockableAdvisor definitions are prototypes. The myAttributeAwarePointcut pointcut can be a singleton definition, as it doesn't hold state for individual advised objects.

7.10. Using TargetSources

Spring offers the concept of a TargetSource, expressed in the org.springframework.aop.TargetSource interface. This interface is responsible for returning the "target object" implementing the join point. The TargetSource implementation is asked for a target instance each time the AOP proxy handles a method invocation.

Developers using Spring AOP don't normally need to work directly with TargetSources, but this provides a powerful means of supporting pooling, hot swappable and other sophisticated targets. For example, a pooling TargetSource can return a different target instance for each invocation, using a pool to manage instances.

If you do not specify a TargetSource, a default implementation is used that wraps a local object. The same target is returned for each invocation (as you would expect).

Let's look at the standard target sources provided with Spring, and how you can use them.

[Tip]Tip

When using a custom target source, your target will usually need to be a prototype rather than a singleton bean definition. This allows Spring to create a new target instance when required.

7.10.1. Hot swappable target sources

The org.springframework.aop.target.HotSwappableTargetSource exists to allow the target of an AOP proxy to be switched while allowing callers to keep their references to it.

Changing the target source's target takes effect immediately. The HotSwappableTargetSource is threadsafe.

You can change the target via the swap() method on HotSwappableTargetSource as follows:

HotSwappableTargetSource swapper = 
    (HotSwappableTargetSource) beanFactory.getBean("swapper");
Object oldTarget = swapper.swap(newTarget);

The XML definitions required look as follows:

<bean id="initialTarget" class="mycompany.OldTarget"/>

<bean id="swapper" class="org.springframework.aop.target.HotSwappableTargetSource">
  <constructor-arg ref="initialTarget"/>
</bean>

<bean id="swappable" class="org.springframework.aop.framework.ProxyFactoryBean">
  <property name="targetSource" ref="swapper"/>
</bean>

The above swap() call changes the target of the swappable bean. Clients who hold a reference to that bean will be unaware of the change, but will immediately start hitting the new target.

Although this example doesn't add any advice - and it's not necessary to add advice to use a TargetSource - of course any TargetSource can be used in conjunction with arbitrary advice.

7.10.2. Pooling target sources

Using a pooling target source provides a similar programming model to stateless session EJBs, in which a pool of identical instances is maintained, with method invocations going to free objects in the pool.

A crucial difference between Spring pooling and SLSB pooling is that Spring pooling can be applied to any POJO. As with Spring in general, this service can be applied in a non-invasive way.

Spring provides out-of-the-box support for Jakarta Commons Pool 1.3, which provides a fairly efficient pooling implementation. You'll need the commons-pool Jar on your application's classpath to use this feature. It's also possible to subclass org.springframework.aop.target.AbstractPoolingTargetSource to support any other pooling API.

Sample configuration is shown below:

<bean id="businessObjectTarget" class="com.mycompany.MyBusinessObject" 
    scope="prototype">
  ... properties omitted
</bean>

<bean id="poolTargetSource" class="org.springframework.aop.target.CommonsPoolTargetSource">
  <property name="targetBeanName" value="businessObjectTarget"/>
  <property name="maxSize" value="25"/>
</bean>

<bean id="businessObject" class="org.springframework.aop.framework.ProxyFactoryBean">
  <property name="targetSource" ref="poolTargetSource"/>
  <property name="interceptorNames" value="myInterceptor"/>
</bean>

Note that the target object - "businessObjectTarget" in the example - must be a prototype. This allows the PoolingTargetSource implementation to create new instances of the target to grow the pool as necessary. See the Javadoc for AbstractPoolingTargetSource and the concrete subclass you wish to use for information about its properties: maxSize is the most basic, and is always guaranteed to be present.

In this case, "myInterceptor" is the name of an interceptor that would need to be defined in the same IoC context. However, it isn't necessary to specify interceptors to use pooling. If you want only pooling, and no other advice, don't set the interceptorNames property at all.

It's possible to configure Spring so as to be able to cast any pooled object to the org.springframework.aop.target.PoolingConfig interface, which exposes information about the configuration and current size of the pool through an introduction. You'll need to define an advisor like this:

<bean id="poolConfigAdvisor" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
  <property name="targetObject" ref="poolTargetSource"/>
  <property name="targetMethod" value="getPoolingConfigMixin"/>
</bean>

This advisor is obtained by calling a convenience method on the AbstractPoolingTargetSource class, hence the use of MethodInvokingFactoryBean. This advisor's name ("poolConfigAdvisor" here) must be in the list of interceptor names in the ProxyFactoryBean exposing the pooled object.

The cast will look as follows:

PoolingConfig conf = (PoolingConfig) beanFactory.getBean("businessObject");
System.out.println("Max pool size is " + conf.getMaxSize());
[Note]Note

Pooling stateless service objects is not usually necessary. We don't believe it should be the default choice, as most stateless objects are naturally thread safe, and instance pooling is problematic if resources are cached.

Simpler pooling is available using autoproxying. It's possible to set the TargetSources used by any autoproxy creator.

7.10.3. Prototype target sources

Setting up a "prototype" target source is similar to a pooling TargetSource. In this case, a new instance of the target will be created on every method invocation. Although the cost of creating a new object isn't high in a modern JVM, the cost of wiring up the new object (satisfying its IoC dependencies) may be more expensive. Thus you shouldn't use this approach without very good reason.

To do this, you could modify the poolTargetSource definition shown above as follows. (I've also changed the name, for clarity.)

<bean id="prototypeTargetSource" class="org.springframework.aop.target.PrototypeTargetSource">
  <property name="targetBeanName" ref="businessObjectTarget"/>
</bean>

There's only one property: the name of the target bean. Inheritance is used in the TargetSource implementations to ensure consistent naming. As with the pooling target source, the target bean must be a prototype bean definition.

7.10.4. ThreadLocal target sources

ThreadLocal target sources are useful if you need an object to be created for each incoming request (one per thread, that is). ThreadLocals provide a JDK-wide facility to transparently store a resource alongside a thread. Setting up a ThreadLocalTargetSource is pretty much the same as was explained for the other types of target source:

<bean id="threadlocalTargetSource" class="org.springframework.aop.target.ThreadLocalTargetSource">
  <property name="targetBeanName" value="businessObjectTarget"/>
</bean>
[Note]Note

ThreadLocals come with serious issues (potentially resulting in memory leaks) when incorrectly used in multi-threaded and multi-classloader environments. You should always consider wrapping a ThreadLocal in some other class and never use the ThreadLocal itself directly (except, of course, in the wrapper class). Also, always remember to correctly set and unset (the latter simply involving a call to ThreadLocal.set(null)) the resource local to the thread; failing to unset it can result in problematic behavior. Spring's ThreadLocal support does this for you, and should always be preferred over using ThreadLocals without other proper handling code.
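
For reference, a minimal sketch of the disciplined set-and-unset pattern described above (plain JDK code, independent of Spring's ThreadLocal support):

private static final ThreadLocal resourceHolder = new ThreadLocal();

public void doWorkWith(Object resource) {
    resourceHolder.set(resource);
    try {
        // ... code that reads resourceHolder.get() ...
    } finally {
        resourceHolder.set(null); // always unset, to avoid leaks across pooled threads
    }
}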

7.11. Defining new Advice types

Spring AOP is designed to be extensible. While the interception implementation strategy is presently used internally, it is possible to support arbitrary advice types in addition to the out-of-the-box interception around advice, before, throws advice and after returning advice.

The org.springframework.aop.framework.adapter package is an SPI package allowing support for new custom advice types to be added without changing the core framework. The only constraint on a custom Advice type is that it must implement the org.aopalliance.aop.Advice tag interface.

Please refer to the org.springframework.aop.framework.adapter package's Javadocs for further information.

7.12. Further resources

Please refer to the Spring sample applications for further examples of Spring AOP:

  • The JPetStore's default configuration illustrates the use of the TransactionProxyFactoryBean for declarative transaction management.

  • The /attributes directory of the JPetStore illustrates the use of attribute-driven declarative transaction management.

Chapter 8. Testing

8.1. Introduction

The Spring team considers developer testing to be an absolutely integral part of enterprise software development. A thorough treatment of testing in the enterprise is beyond the scope of this chapter; rather, the focus here is on the value add that the adoption of the IoC principle can bring to unit testing; and on the benefits that the Spring Framework provides in integration testing.

8.2. Unit testing

One of the main benefits of Dependency Injection is that your code should really depend far less on the container than in traditional J2EE development. The POJOs that comprise your application should be testable in JUnit tests, with objects simply instantiated using the new operator, without Spring or any other container. You can use mock objects (in conjunction with many other valuable testing techniques) to test your code in isolation. If you follow the architecture recommendations around Spring you will find that the resulting clean layering and componentization of your codebase will naturally facilitate easier unit testing. For example, you will be able to test service layer objects by stubbing or mocking DAO interfaces, without any need to access persistent data while running unit tests.

True unit tests typically will run extremely quickly, as there is no runtime infrastructure to set up, whether application server, database, ORM tool, or whatever. Thus emphasizing true unit tests as part of your development methodology will boost your productivity. The upshot of this is that you do not need this section of the testing chapter to help you write effective unit tests for your IoC-based applications.

8.3. Integration testing

However, it is also important to be able to perform some integration testing without requiring deployment to your application server or connecting to other enterprise infrastructure. This will enable you to test things such as:

  • The correct wiring of your Spring IoC container contexts.

  • Data access using JDBC or an ORM tool. This would include such things as the correctness of SQL statements or Hibernate XML mapping files.

The Spring Framework provides first class support for integration testing in the form of the classes that are packaged in the spring-mock.jar library. Please note that these test classes are JUnit-specific.

The org.springframework.test package provides valuable JUnit TestCase superclasses for integration testing using a Spring container, while at the same time not being reliant on an application server or other deployed environment. They will be slower to run than unit tests, but much faster to run than the equivalent Cactus tests or remote tests relying on deployment to an application server.

These superclasses provide the following functionality:

8.3.1. Context management and caching

The org.springframework.test package provides support for consistent loading of Spring contexts, and caching of loaded contexts. Support for the caching of loaded contexts is important, because if you are working on a large project, startup time may become an issue - not because of the overhead of Spring itself, but because the objects instantiated by the Spring container will themselves take time to instantiate. For example, a project with 50-100 Hibernate mapping files might take 10-20 seconds to load the mapping files, and incurring that cost before running every single test case in every single test fixture will lead to slower overall test runs that could reduce productivity.

To address this issue, the AbstractDependencyInjectionSpringContextTests class has a protected method that subclasses must implement to provide the location of context definition files:

protected String[] getConfigLocations();

Implementations of this method must provide an array containing the resource locations of XML configuration metadata - typically on the classpath - used to configure the application. This will be the same, or nearly the same, as the list of configuration locations specified in web.xml or other deployment configuration.

By default, once loaded, the configuration fileset will be reused for each test case. Thus the setup cost will be incurred only once (per test fixture), and subsequent test execution will be much faster. In the unlikely case that a test may 'dirty' the config location, requiring reloading - for example, by changing a bean definition or the state of an application object - you can call the setDirty() method on AbstractDependencyInjectionSpringContextTests to cause the test fixture to reload the configurations and rebuild the application context before executing the next test case.
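
For example, a test method that knowingly dirties the context might look like the following sketch (the body of the method is hypothetical):

public void testReplacesBeanDefinition() {
    // ... logic that changes a bean definition or shared application object state ...
    setDirty(); // forces the configuration to be reloaded before the next test executes
}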

8.3.2. Dependency Injection of test fixtures

When AbstractDependencyInjectionSpringContextTests (and subclasses) load your application context, they can optionally configure instances of your test classes by Setter Injection. All you need to do is to define instance variables and the corresponding setters. AbstractDependencyInjectionSpringContextTests will automatically locate the corresponding object in the set of configuration files specified in the getConfigLocations() method.

Consider the scenario where we have a class, HibernateTitleDao, that performs data access logic for, say, the Title domain object. We want to write integration tests that test all of the following areas:

  • The Spring configuration; basically, is everything related to the configuration of the HibernateTitleDao bean correct and present?

  • The Hibernate mapping file configuration; is everything mapped correctly and are the correct lazy-loading settings in place?

  • The logic of the HibernateTitleDao; does the configured instance of this class perform as anticipated?

Let's look at the test class itself (we will look at the configuration immediately afterwards).

public final class HibernateTitleDaoTests extends AbstractDependencyInjectionSpringContextTests  {

    // this instance will be (automatically) dependency injected    
    private HibernateTitleDao titleDao;

    // a setter method to enable DI of the 'titleDao' instance variable
    public void setTitleDao(HibernateTitleDao titleDao) {
        this.titleDao = titleDao;
    }

    public void testLoadTitle() throws Exception {
        Title title = this.titleDao.loadTitle(new Long(10));
        assertNotNull(title);
    }

    // specifies the Spring configuration to load for this test fixture
    protected String[] getConfigLocations() {
        return new String[] { "classpath:com/foo/daos.xml" };
    }

}

The file referenced by the getConfigLocations() method ('classpath:com/foo/daos.xml') looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC  "-//SPRING//DTD BEAN 2.0//EN"
    "http://www.springframework.org/dtd/http://www.springframework.org/dtd/spring-beans-2.0.dtd">
<beans>

    <!-- this bean will be injected into the HibernateTitleDaoTests class -->
    <bean id="titleDao" class="com.foo.dao.hibernate.HibernateTitleDao">
        <property name="sessionFactory" ref="sessionFactory"/>
    </bean>
    
    <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
        <!-- dependencies elided for clarity -->
    </bean>

</beans>

The AbstractDependencyInjectionSpringContextTests class uses autowiring by type. Thus, if you have multiple bean definitions of the same type, you cannot rely on this approach for those particular beans. In that case, you can use the inherited applicationContext instance variable and perform an explicit lookup, for example via an explicit call to applicationContext.getBean("titleDao").
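
A minimal sketch of such an explicit lookup, using the inherited applicationContext variable (shown here in an onSetUp() callback, though any suitable setup hook will do):

protected void onSetUp() throws Exception {
    // explicit lookup by bean name, since autowiring by type would be ambiguous
    this.titleDao = (HibernateTitleDao) applicationContext.getBean("titleDao");
}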

If you don't want dependency injection applied to your test cases, simply don't declare any setters. Alternatively, you can extend the AbstractSpringContextTests - the root of the class hierarchy in the org.springframework.test package. It merely contains convenience methods to load Spring contexts, and performs no Dependency Injection of the test fixture.

8.3.2.1. Field level injection

If, for whatever reason, you don't fancy having setter methods in your test fixtures, Spring can (in this one case) inject dependencies into protected fields. Find below a reworking of the previous example to use field level injection (the Spring XML configuration does not need to change, merely the test fixture).

public final class HibernateTitleDaoTests extends AbstractDependencyInjectionSpringContextTests  {

    public HibernateTitleDaoTests() {
    	// switch on field level injection
        setPopulateProtectedVariables(true);
    }

    // this instance will be (automatically) dependency injected
    protected HibernateTitleDao titleDao;

    public void testLoadTitle() throws Exception {
        Title title = this.titleDao.loadTitle(new Long(10));
        assertNotNull(title);
    }

    // specifies the Spring configuration to load for this test fixture
    protected String[] getConfigLocations() {
        return new String[] { "classpath:com/foo/daos.xml" };
    }

}

In the case of field injection, there is no autowiring going on: the names of your protected instance variables are used as the names of the beans to look up in the configured Spring container.

8.3.3. Transaction management

One common issue in tests that access a real database is their effect on the state of the persistence store. Even when you're using a development database, changes to the state may affect future tests. Also, many operations - such as inserting or modifying persistent data - cannot be done (or verified) outside a transaction.

The org.springframework.test.AbstractTransactionalDataSourceSpringContextTests superclass (and subclasses) exist to meet this need. By default, they create and roll back a transaction for each test. You simply write code that can assume the existence of a transaction. If you call transactionally proxied objects in your tests, they will behave correctly, according to their transactional semantics.

AbstractTransactionalSpringContextTests depends on a PlatformTransactionManager bean being defined in the application context. The name doesn't matter, due to the use of autowire by type.

Typically you will extend the subclass, AbstractTransactionalDataSourceSpringContextTests. This also requires that a DataSource bean definition - again, with any name - be present in the configurations. It creates a JdbcTemplate instance variable that is useful for convenient querying, and provides handy methods to delete the contents of selected tables (remember that the transaction will roll back by default, so this is safe to do).

If you want a transaction to commit - unusual, but occasionally useful when you want a particular test to populate the database - you can call the setComplete() method inherited from AbstractTransactionalSpringContextTests. This will cause the transaction to commit instead of roll back.

There is also the convenient ability to end a transaction before the test case ends, by calling the endTransaction() method. This will roll back the transaction by default, and commit it only if setComplete() had previously been called. This functionality is useful if you want to test the behavior of 'disconnected' data objects, such as Hibernate-mapped objects that will be used in a web or remoting tier outside a transaction. Often, lazy loading errors are discovered only through UI testing; if you call endTransaction() you can ensure correct operation of the UI through your JUnit test suite.
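
The following sketch shows both methods in use; the DAO and domain class are those of the earlier hypothetical Title example:

public void testTitleOutsideTransaction() {
    Title title = this.titleDao.loadTitle(new Long(10));

    // roll back and end the transaction now, rather than at the end of the test
    endTransaction();

    // 'title' is now a disconnected object; accessing lazy associations here
    // would reveal lazy-loading problems before UI testing does
    assertNotNull(title);
}

public void testPopulateReferenceData() {
    // ... insert reference data via this.titleDao or jdbcTemplate ...
    setComplete(); // commit instead of rolling back when the transaction ends
}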

8.3.4. Convenience variables

When you extend the AbstractTransactionalDataSourceSpringContextTests class you will have access to the following protected instance variables:

  • applicationContext (a ConfigurableApplicationContext): inherited from the AbstractDependencyInjectionSpringContextTests superclass. Use this to perform explicit bean lookup, or to test the state of the context as a whole.

  • jdbcTemplate: inherited from AbstractTransactionalDataSourceSpringContextTests. Useful for querying to confirm state. For example, you might query before and after testing application code that creates an object and persists it using an ORM tool, to verify that the data appears in the database. (Spring will ensure that the query runs in the scope of the same transaction.) You will need to tell your ORM tool to 'flush' its changes for this to work correctly, for example using the flush() method on Hibernate's Session interface.
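
For example, a sketch of such a verification; the table name, the saveTitle(..) method and the Title constructor are hypothetical:

public void testSaveTitle() {
    int rowsBefore = jdbcTemplate.queryForInt("SELECT COUNT(0) FROM titles");

    // application code under test; assumed to flush its ORM session so the INSERT is visible
    this.titleDao.saveTitle(new Title("Some Title"));

    int rowsAfter = jdbcTemplate.queryForInt("SELECT COUNT(0) FROM titles");
    assertEquals("one new row expected", rowsBefore + 1, rowsAfter);
}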

Often you will provide an application-wide superclass for integration tests that provides further useful instance variables used in many tests.

8.3.5. Java5+ specific support

If you are developing against Java 5 or greater, there are some additional annotations and support classes that you can use in your testing. The AbstractAnnotationAwareTransactionalTests class extends AbstractTransactionalDataSourceSpringContextTests and makes the test fixtures that inherit from it aware of a number of (Spring-specific) annotations.

8.3.5.1. Annotations

The Spring Framework provides a number of annotations to help when writing integration tests. Please note that these annotations must be used in conjunction with the aforementioned AbstractAnnotationAwareTransactionalTests in order for the presence of these annotations to have any effect.

  • @DirtiesContext.

    The presence of this annotation on a test method indicates that the underlying Spring container is 'dirtied' during the execution of the test method, and thus must be rebuilt after the test method finishes execution (regardless of whether the test passed or not). Has the same effect as a regular setDirty() invocation.

    @DirtiesContext
    public void testProcess() {
    	// some logic that results in the Spring container being dirtied
    }
  • @ExpectedException.

    Indicates that the annotated test method is expected to throw an exception during execution. The type of the expected exception is provided in the annotation, and if an instance of the exception is thrown during the test method execution then the test passes. Likewise, if an instance of the exception is not thrown during the test method execution then the test fails.

    @ExpectedException(SomeBusinessException.class)
    public void testProcessRainyDayScenario() {
    	// some logic that results in an Exception being thrown
    }
  • @NotTransactional.

    Simply indicates that the annotated test method must not execute in a transactional context.

    @NotTransactional
    public void testProcess() {
    	// ...
    }
  • @Repeat

    Indicates that the annotated test method must be executed repeatedly. The number of times that the test method is to be executed is specified in the annotation.

    @Repeat(10)
    public void testProcessRepeatedly() {
    	// ...
    }

8.3.6. PetClinic example

The PetClinic sample application included with the Spring distribution illustrates the use of these test superclasses. Most test functionality is included in the AbstractClinicTests, for which a partial listing is shown below:

public abstract class AbstractClinicTests
               extends AbstractTransactionalDataSourceSpringContextTests {

   protected Clinic clinic;

   public void setClinic(Clinic clinic) {
      this.clinic = clinic;
   }

   public void testGetVets() {
      Collection vets = this.clinic.getVets();
      assertEquals("JDBC query must show the same number of vets",
         jdbcTemplate.queryForInt("SELECT COUNT(0) FROM VETS"), 
         vets.size());
      Vet v1 = (Vet) EntityUtils.getById(vets, Vet.class, 2);
      assertEquals("Leary", v1.getLastName());
      assertEquals(1, v1.getNrOfSpecialties());
      assertEquals("radiology", ((Specialty) v1.getSpecialties().get(0)).getName());
      Vet v2 = (Vet) EntityUtils.getById(vets, Vet.class, 3);
      assertEquals("Douglas", v2.getLastName());
      assertEquals(2, v2.getNrOfSpecialties());
      assertEquals("dentistry", ((Specialty) v2.getSpecialties().get(0)).getName());
      assertEquals("surgery", ((Specialty) v2.getSpecialties().get(1)).getName());
   }
}

Notes:

  • This test case extends the AbstractTransactionalDataSourceSpringContextTests class, from which it inherits Dependency Injection and transactional behavior.

  • The clinic instance variable - the application object being tested - is set by Dependency Injection through the setClinic(..) method.

  • The testGetVets() method illustrates how the inherited JdbcTemplate variable can be used to verify correct behavior of the application code being tested. This allows for stronger tests, and lessens dependency on the exact test data. For example, you can add additional rows in the database without breaking tests.

  • Like many integration tests using a database, most of the tests in AbstractClinicTests depend on a minimum amount of data already in the database before the test cases run. You might, however, choose to populate the database in your test cases also - again, within the one transaction.

The PetClinic application supports four data access technologies - JDBC, Hibernate, TopLink, and JPA. Thus the AbstractClinicTests class does not itself specify the context locations - this is deferred to subclasses that implement the necessary protected abstract method from AbstractDependencyInjectionSpringContextTests.

For example, the Hibernate implementation of the PetClinic tests contains the following implementation:

public final class HibernateClinicTests extends AbstractClinicTests {

   protected String[] getConfigLocations() {
      return new String[] { 
         "/org/springframework/samples/petclinic/hibernate/applicationContext-hibernate.xml" 
      };
   }
}

As the PetClinic is a very simple application, there is only one Spring configuration file. Of course, more complex applications will typically break their Spring configuration across multiple files. Instead of being defined in a leaf class, config locations will often be specified in a common base class for all application-specific integration tests. This may also add useful instance variables - populated by Dependency Injection, naturally - such as a HibernateTemplate, in the case of an application using Hibernate.

As far as possible, you should have exactly the same Spring configuration files in your integration tests as in the deployed environment. One likely point of difference concerns database connection pooling and transaction infrastructure. If you are deploying to a full-blown application server, you will probably use its connection pool (available through JNDI) and JTA implementation. Thus in production you will use a JndiObjectFactoryBean for the DataSource, and JtaTransactionManager. JNDI and JTA will not be available in out-of-container integration tests, so you should use a combination like the Commons DBCP BasicDataSource and DataSourceTransactionManager or HibernateTransactionManager for them. You can factor out this variant behavior into a single XML file, having the choice between application server and 'local' configuration separated from all other configuration, which will not vary between the test and production environments.

8.4. Further Resources

This section contains links to further resources about testing in general.

Part II. Middle Tier Data Access

This part of the reference documentation is concerned with the middle tier, and specifically the data access responsibilities of said tier.

Spring's comprehensive transaction management support is covered in some detail, followed by thorough coverage of the various middle tier data access frameworks and technologies that the Spring Framework integrates with.

Chapter 9. Transaction management

9.1. Introduction

One of the most compelling reasons to use the Spring Framework is the comprehensive transaction support. The Spring Framework provides a consistent abstraction for transaction management that delivers the following benefits:

  • Provides a consistent programming model across different transaction APIs such as JTA, JDBC, Hibernate, JPA, and JDO.

  • Supports declarative transaction management.

  • Provides a simpler API for programmatic transaction management than a number of complex transaction APIs such as JTA.

  • Integrates very well with Spring's various data access abstractions.

This chapter is divided up into a number of sections, each detailing one of the value-adds or technologies of the Spring Framework's transaction support. The chapter closes with some discussion of best practices surrounding transaction management (for example, choosing between declarative and programmatic transaction management).

  • The first section, entitled Motivations, describes why one would want to use the Spring Framework's transaction abstraction as opposed to EJB CMT or driving transactions via a proprietary API such as Hibernate.

  • The second section, entitled Key abstractions outlines the core classes in the Spring Framework's transaction support, as well as how to configure and obtain DataSource instances from a variety of sources.

  • The third section, entitled Declarative transaction management, covers the Spring Framework's support for declarative transaction management.

  • The fourth section, entitled Programmatic transaction management, covers the Spring Framework's support for programmatic (that is, explicitly coded) transaction management.

9.2. Motivations

Traditionally, J2EE developers have had two choices for transaction management: global or local transactions. Global transactions are managed by the application server, using the Java Transaction API (JTA). Local transactions are resource-specific: the most common example would be a transaction associated with a JDBC connection. This choice has profound implications. For instance, global transactions provide the ability to work with multiple transactional resources (typically relational databases and message queues). With local transactions, the application server is not involved in transaction management and cannot help ensure correctness across multiple resources. (It is worth noting that most applications use a single transaction resource.)

Global Transactions. Global transactions have a significant downside, in that code needs to use JTA, and JTA is a cumbersome API to use (partly due to its exception model). Furthermore, a JTA UserTransaction normally needs to be sourced from JNDI: meaning that we need to use both JNDI and JTA to use JTA. Obviously all use of global transactions limits the reusability of application code, as JTA is normally only available in an application server environment. Previously, the preferred way to use global transactions was via EJB CMT (Container Managed Transaction): CMT is a form of declarative transaction management (as distinguished from programmatic transaction management). EJB CMT removes the need for transaction-related JNDI lookups - although of course the use of EJB itself necessitates the use of JNDI. It removes most of the need (although not entirely) to write Java code to control transactions. The significant downside is that CMT is tied to JTA and an application server environment. Also, it is only available if one chooses to implement business logic in EJBs, or at least behind a transactional EJB facade. The negatives around EJB in general are so great that this is not an attractive proposition, especially in the face of compelling alternatives for declarative transaction management.

Local Transactions. Local transactions may be easier to use, but have significant disadvantages: they cannot work across multiple transactional resources. For example, code that manages transactions using a JDBC connection cannot run within a global JTA transaction. Another downside is that local transactions tend to be invasive to the programming model.

Spring resolves these problems. It enables application developers to use a consistent programming model in any environment. You write your code once, and it can benefit from different transaction management strategies in different environments. The Spring Framework provides both declarative and programmatic transaction management. Declarative transaction management is preferred by most users, and is recommended in most cases.

With programmatic transaction management, developers work with the Spring Framework transaction abstraction, which can run over any underlying transaction infrastructure. With the preferred declarative model, developers typically write little or no code related to transaction management, and hence don't depend on the Spring Framework's transaction API (or indeed on any other transaction API).

9.3. Key abstractions

The key to the Spring transaction abstraction is the notion of a transaction strategy. A transaction strategy is defined by the org.springframework.transaction.PlatformTransactionManager interface, shown below:

public interface PlatformTransactionManager {

    TransactionStatus getTransaction(TransactionDefinition definition)
        throws TransactionException;

    void commit(TransactionStatus status) throws TransactionException;

    void rollback(TransactionStatus status) throws TransactionException;
}

This is primarily an SPI interface, although it can be used programmatically. Note that in keeping with the Spring Framework's philosophy, PlatformTransactionManager is an interface, and can thus be easily mocked or stubbed as necessary. Nor is it tied to a lookup strategy such as JNDI: PlatformTransactionManager implementations are defined like any other object (or bean) in the Spring Framework's IoC container. This benefit alone makes it a worthwhile abstraction even when working with JTA: transactional code can be tested much more easily than if it used JTA directly.

Again in keeping with Spring's philosophy, the TransactionException that can be thrown by any of the PlatformTransactionManager interface's methods is unchecked (i.e. it extends the java.lang.RuntimeException class). Transaction infrastructure failures are almost invariably fatal. In rare cases where application code can actually recover from a transaction failure, the application developer can still choose to catch and handle TransactionException. The salient point is that developers are not forced to do so.

The getTransaction(..) method returns a TransactionStatus object, based on the supplied TransactionDefinition parameter. The returned TransactionStatus might represent a new transaction, or an existing transaction if a matching transaction exists in the current call stack. The implication in this latter case is that, as with J2EE transaction contexts, a TransactionStatus is associated with a thread of execution.

The TransactionDefinition interface specifies:

  • Isolation: the degree of isolation this transaction has from the work of other transactions. For example, can this transaction see uncommitted writes from other transactions?

  • Propagation: normally all code executed within a transaction scope will run in that transaction. However, there are several options specifying behavior if a transactional method is executed when a transaction context already exists: for example, simply continuing to run in the existing transaction (the common case), or suspending the existing transaction and creating a new one. Spring offers all of the transaction propagation options familiar from EJB CMT.

  • Timeout: how long this transaction may run before timing out (and automatically being rolled back by the underlying transaction infrastructure).

  • Read-only status: a read-only transaction does not modify any data. Read-only transactions can be a useful optimization in some cases (such as when using Hibernate).

These settings reflect standard transactional concepts. If necessary, please refer to a resource discussing transaction isolation levels and other core transaction concepts because understanding such core concepts is essential to using the Spring Framework or indeed any other transaction management solution.
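
Although most applications will use the declarative model described later in this chapter, direct programmatic use of these abstractions looks roughly like the following sketch. It assumes that a PlatformTransactionManager named txManager has already been obtained, for example via Dependency Injection:

DefaultTransactionDefinition def = new DefaultTransactionDefinition();
def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);

TransactionStatus status = txManager.getTransaction(def);
try {
    // ... do work within the transaction ...
    txManager.commit(status);
} catch (RuntimeException ex) {
    txManager.rollback(status);
    throw ex;
}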

The TransactionStatus interface provides a simple way for transactional code to control transaction execution and query transaction status. The concepts should be familiar, as they are common to all transaction APIs:

public interface TransactionStatus {

    boolean isNewTransaction();

    void setRollbackOnly();

    boolean isRollbackOnly();
}

Regardless of whether you opt for declarative or programmatic transaction management in Spring, defining the correct PlatformTransactionManager implementation is absolutely essential. In good Spring fashion, this important definition is typically made via Dependency Injection.

PlatformTransactionManager implementations normally require knowledge of the environment in which they work: JDBC, JTA, Hibernate, etc. The following examples from the dataAccessContext-local.xml file of Spring's JPetStore sample application show how a local PlatformTransactionManager implementation can be defined. (This will work with plain JDBC.)

We must define a JDBC DataSource, and then use the Spring DataSourceTransactionManager, giving it a reference to the DataSource.

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="${jdbc.driverClassName}" />
  <property name="url" value="${jdbc.url}" />
  <property name="username" value="${jdbc.username}" />
  <property name="password" value="${jdbc.password}" />
</bean>

The related PlatformTransactionManager bean definition will look like this:

<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
  <property name="dataSource" ref="dataSource"/>
</bean>

If we use JTA in a J2EE container, as in the 'dataAccessContext-jta.xml' file from the same sample application, we use a container DataSource, obtained via JNDI, in conjunction with Spring's JtaTransactionManager. The JtaTransactionManager doesn't need to know about the DataSource, or any other specific resources, as it will use the container's global transaction management infrastructure.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jee="http://www.springframework.org/schema/jee"
xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
       http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-2.0.xsd">

  <jee:jndi-lookup id="dataSource" jndi-name="jdbc/jpetstore"/> 

  <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager" />
  
  <!-- other <bean/> definitions here -->

</beans>
[Note]Note

The above definition of the 'dataSource' bean uses the <jndi-lookup/> tag from the 'jee' namespace. For more information on schema-based configuration, see Appendix A, XML Schema-based configuration; for more information on the <jee/> tags, see Section A.2.3, “The jee schema”.

We can also use Hibernate local transactions easily, as shown in the following examples from the Spring Framework's PetClinic sample application. In this case, we need to define a Hibernate LocalSessionFactoryBean, which application code will use to obtain Hibernate Session instances.

The DataSource bean definition will be similar to the one shown previously (and thus is not shown). If the DataSource is managed by the JEE container it should be non-transactional as the Spring Framework, rather than the JEE container, will manage transactions.

The 'txManager' bean in this case is of the HibernateTransactionManager type. In the same way as the DataSourceTransactionManager needs a reference to the DataSource, the HibernateTransactionManager needs a reference to the SessionFactory.

<bean id="sessionFactory" class="org.springframework.orm.hibernate.LocalSessionFactoryBean">
  <property name="dataSource" ref="dataSource" />
  <property name="mappingResources">
    <list>
      <value>org/springframework/samples/petclinic/hibernate/petclinic.hbm.xml</value>
    </list>
  </property>
  <property name="hibernateProperties">
    <value>
	  hibernate.dialect=${hibernate.dialect}
	</value>
  </property>
</bean>

<bean id="txManager" class="org.springframework.orm.hibernate.HibernateTransactionManager">
  <property name="sessionFactory" ref="sessionFactory" />
</bean>

With Hibernate and JTA transactions, we can simply use the JtaTransactionManager as with JDBC or any other resource strategy.

<bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>

Note that this is identical to JTA configuration for any resource, as these are global transactions, which can enlist any transactional resource.

In all these cases, application code will not need to change at all. We can change how transactions are managed merely by changing configuration, even if that change means moving from local to global transactions or vice versa.

9.4. Resource synchronization with transactions

It should now be clear how different transaction managers are created, and how they are linked to the related resources which need to be synchronized to transactions (for example, DataSourceTransactionManager to a JDBC DataSource, HibernateTransactionManager to a Hibernate SessionFactory, and so on). There remains the question, however, of how application code that uses a persistence API (JDBC, Hibernate, JDO, etc.), directly or indirectly, ensures that these resources are obtained, reused, and cleaned up properly, and (optionally) triggers transaction synchronization via the relevant PlatformTransactionManager.

9.4.1. High-level approach

The preferred approach is to use Spring's highest-level persistence integration APIs. These do not replace the native APIs, but internally handle resource creation/reuse, cleanup, optional transaction synchronization of the resources, and exception mapping, so that user data access code doesn't have to worry about these concerns at all and can concentrate purely on non-boilerplate persistence logic. Generally, the same template approach is used for all persistence APIs, with examples including the JdbcTemplate, HibernateTemplate, and JdoTemplate classes (detailed in subsequent chapters of this reference documentation).
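
For example, a single query via the JdbcTemplate hides all resource handling from the calling code; the DataSource reference and table name here are assumptions:

JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

// connection acquisition, release and exception translation are handled by the template
int accountCount = jdbcTemplate.queryForInt("SELECT COUNT(0) FROM accounts");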

9.4.2. Low-level approach

At a lower level exist classes such as DataSourceUtils (for JDBC), SessionFactoryUtils (for Hibernate), PersistenceManagerFactoryUtils (for JDO), and so on. When it is preferable for application code to deal directly with the resource types of the native persistence APIs, these classes ensure that proper Spring Framework-managed instances are obtained, transactions are (optionally) synchronized, and exceptions which happen in the process are properly mapped to a consistent API.

For example, in the case of JDBC, instead of the traditional JDBC approach of calling the getConnection() method on the DataSource, you would instead use Spring's org.springframework.jdbc.datasource.DataSourceUtils class as follows:

Connection conn = DataSourceUtils.getConnection(dataSource);

If an existing transaction exists, and already has a connection synchronized (linked) to it, that instance will be returned. Otherwise, the method call will trigger the creation of a new connection, which will be (optionally) synchronized to any existing transaction, and made available for subsequent reuse in that same transaction. As mentioned, this has the added advantage that any SQLException will be wrapped in a Spring Framework CannotGetJdbcConnectionException - one of the Spring Framework's hierarchy of unchecked DataAccessExceptions. This gives you more information than can easily be obtained from the SQLException, and ensures portability across databases: even across different persistence technologies.

It should be noted that this will also work fine without Spring transaction management (transaction synchronization is optional), so you can use it whether or not you are using Spring for transaction management.
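
A slightly fuller sketch, including the corresponding release call (the SQL statement and table are hypothetical):

public void debitAccount(DataSource dataSource) throws SQLException {
    Connection conn = DataSourceUtils.getConnection(dataSource);
    try {
        Statement stmt = conn.createStatement();
        try {
            stmt.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
        } finally {
            stmt.close();
        }
    } finally {
        // returns the Connection to the pool, or leaves it open if it is
        // synchronized with an ongoing Spring-managed transaction
        DataSourceUtils.releaseConnection(conn, dataSource);
    }
}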

Of course, once you've used Spring's JDBC support or Hibernate support, you will generally prefer not to use DataSourceUtils or the other helper classes, because you'll be much happier working via the Spring abstraction than directly with the relevant APIs. For example, if you use the Spring JdbcTemplate or jdbc.object package to simplify your use of JDBC, correct connection retrieval happens behind the scenes and you won't need to write any special code.

9.4.3. TransactionAwareDataSourceProxy

At the very lowest level exists the TransactionAwareDataSourceProxy class. This is a proxy for a target DataSource, which wraps the target DataSource to add awareness of Spring-managed transactions. In this respect, it is similar to a transactional JNDI DataSource as provided by a J2EE server.

It should almost never be necessary or desirable to use this class, except when existing code must be called and passed a standard JDBC DataSource interface implementation. In that case, such code can remain usable while still participating in Spring-managed transactions. It is preferable to write any new code using the higher-level abstractions mentioned above.
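By way of illustration only, the following snippet sketches how such a proxy might be created programmatically; the 'targetDataSource' variable and the 'someLegacyComponent' object are assumptions for the example, not part of this reference.

// wrap the target DataSource (assumed to be obtained or injected elsewhere) so that
// code expecting a plain javax.sql.DataSource sees Spring-managed transactional connections
DataSource dataSource = new TransactionAwareDataSourceProxy(targetDataSource);

// the wrapped DataSource can now be handed to the existing code unchanged
someLegacyComponent.setDataSource(dataSource);

Equivalently, the proxy can be defined as a bean with its 'targetDataSource' property pointing at the real DataSource bean.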

9.5. Declarative transaction management

Most users of the Spring Framework choose declarative transaction management. It is the option with the least impact on application code, and hence is most consistent with the ideals of a non-invasive lightweight container.

The Spring Framework's declarative transaction management is made possible with Spring AOP, although, as the transactional aspects code comes with the Spring Framework distribution and may be used in a boilerplate fashion, AOP concepts do not generally have to be understood to make effective use of this code.

It may be helpful to begin by considering EJB CMT and explaining the similarities and differences with the Spring Framework's declarative transaction management. The basic approach is similar: it is possible to specify transaction behavior (or lack of it) down to individual method level. It is possible to make a setRollbackOnly() call within a transaction context if necessary. The differences are:

  • Unlike EJB CMT, which is tied to JTA, the Spring Framework's declarative transaction management works in any environment. It can work with JDBC, JDO, Hibernate or other transactions under the covers, with configuration changes only.

  • The Spring Framework enables declarative transaction management to be applied to any class, not merely special classes such as EJBs.

  • The Spring Framework offers declarative rollback rules: a feature with no EJB equivalent, which we'll discuss below. Rollback can be controlled declaratively, not merely programmatically.

  • The Spring Framework gives you an opportunity to customize transactional behavior, using AOP. For example, if you want to insert custom behavior in the case of transaction rollback, you can. You can also add arbitrary advice, along with the transactional advice. With EJB CMT, you have no way to influence the container's transaction management other than setRollbackOnly().

  • The Spring Framework does not support propagation of transaction contexts across remote calls, as do high-end application servers. If you need this feature, we recommend that you use EJB. However, consider carefully before using such a feature. Normally, we do not want transactions to span remote calls.

The concept of rollback rules is important: they enable us to specify which exceptions (and throwables) should cause automatic roll back. We specify this declaratively, in configuration, not in Java code. So, while we can still call setRollbackOnly() on the TransactionStatus object to roll the current transaction back programmatically, most often we can specify a rule that MyApplicationException must always result in rollback. This has the significant advantage that business objects don't need to depend on the transaction infrastructure. For example, they typically don't need to import any Spring APIs, transaction or other.

While the EJB default behavior is for the EJB container to automatically roll back the transaction on a system exception (usually a runtime exception), EJB CMT does not roll back the transaction automatically on an application exception (i.e. a checked exception other than java.rmi.RemoteException). While the Spring default behavior for declarative transaction management follows EJB convention (roll back is automatic only on unchecked exceptions), it is often useful to customize this.

9.5.1. Understanding the Spring Framework's declarative transaction implementation

The aim of this section is to dispel the mystique that is sometimes associated with the use of declarative transactions. It is all very well for this reference documentation simply to tell you to annotate your classes with the @Transactional annotation, add the line ('<tx:annotation-driven/>') to your configuration, and then expect you to understand how it all works. This section will explain the inner workings of the Spring Framework's declarative transaction infrastructure to help you navigate your way back upstream to calmer waters in the event of transaction-related issues.

[Tip]Tip

Looking at the Spring Framework source code is a good way to get a real understanding of the transaction support. We also suggest turning the logging level to 'DEBUG' during development to better see what goes on under the hood.

The most important concepts to grasp with regard to the Spring Framework's declarative transaction support are that this support is enabled via AOP proxies, and that the transactional advice is driven by metadata (currently XML- or annotation-based). The combination of AOP with transactional metadata yields an AOP proxy that uses a TransactionInterceptor in conjunction with an appropriate PlatformTransactionManager implementation to drive transactions around method invocations.

[Note]Note

Although knowledge of Spring AOP is not required to use Spring's declarative transaction support, it can help. Spring AOP is thoroughly covered in the chapter entitled Chapter 6, Aspect Oriented Programming with Spring.

Conceptually, calling a method on a transactional proxy looks like this: the caller invokes the method on the AOP proxy; the transactional advice (the TransactionInterceptor) creates or joins a transaction via the configured PlatformTransactionManager; the method on the actual target object is then invoked; and, on the way out, the interceptor commits the transaction or marks it for rollback, depending on the outcome of the invocation and the transaction configuration.

9.5.2. A first example

Consider the following interface, and its attendant implementation. (The intent is to convey the concepts, and using the rote Foo and Bar tropes means that you can concentrate on the transaction usage and not have to worry about the domain model.)

// the service interface that we want to make transactional
            
package x.y.service;
            
public interface FooService {

    Foo getFoo(String fooName);

    Foo getFoo(String fooName, String barName);

    void insertFoo(Foo foo);

    void updateFoo(Foo foo);

}
// an implementation of the above interface
            
package x.y.service;

public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) {
        throw new UnsupportedOperationException();
    }

    public Foo getFoo(String fooName, String barName) {
        throw new UnsupportedOperationException();
    }

    public void insertFoo(Foo foo) {
        throw new UnsupportedOperationException();
    }

    public void updateFoo(Foo foo) {
        throw new UnsupportedOperationException();
    }

}

(For the purposes of this example, the fact that the DefaultFooService class throws UnsupportedOperationException instances in the body of each implemented method is good; it will allow us to see transactions being created and then rolled back in response to the UnsupportedOperationException instance being thrown.)

Let's assume that the first two methods of the FooService interface (getFoo(String) and getFoo(String, String)) have to execute in the context of a transaction with read-only semantics, and that the other methods (insertFoo(Foo) and updateFoo(Foo)) have to execute in the context of a transaction with read-write semantics. Don't worry about taking the following configuration in all at once; everything will be explained in detail in the next few paragraphs.

<!-- from the file 'context.xml' -->
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
       http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
       http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">
  
  <!-- this is the service object that we want to make transactional -->
  <bean id="fooService" class="x.y.service.DefaultFooService"/>

  <!-- the transactional advice (i.e. what 'happens'; see the <aop:advisor/> bean below) -->
  <tx:advice id="txAdvice" transaction-manager="txManager">
    <!-- the transactional semantics... -->
    <tx:attributes>
      <!-- all methods starting with 'get' are read-only -->
      <tx:method name="get*" read-only="true"/>
      <!-- other methods use the default transaction settings (see below) -->
      <tx:method name="*"/>
    </tx:attributes>
  </tx:advice>
  
  <!-- ensure that the above transactional advice runs for any execution
      of an operation defined by the FooService interface -->
  <aop:config>
    <aop:pointcut id="fooServiceOperation" expression="execution(* x.y.service.FooService.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="fooServiceOperation"/>
  </aop:config>
  
  <!-- don't forget the DataSource -->
  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
    <property name="url" value="jdbc:oracle:thin:@rj-t42:1521:elvis"/>
    <property name="username" value="scott"/>
    <property name="password" value="tiger"/>
  </bean>

  <!-- similarly, don't forget the PlatformTransactionManager -->
  <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
  </bean>
  
  <!-- other <bean/> definitions here -->

</beans>

Let's pick apart the above configuration. We have a service object (the 'fooService' bean) that we want to make transactional. The transaction semantics that we want to apply are encapsulated in the <tx:advice/> definition. The <tx:advice/> definition reads as “... all methods starting with 'get' are to execute in the context of a read-only transaction, and all other methods are to execute with the default transaction semantics”. The 'transaction-manager' attribute of the <tx:advice/> tag is set to the name of the PlatformTransactionManager bean that is going to actually drive the transactions (in this case the 'txManager' bean).

[Tip]Tip

You can actually omit the 'transaction-manager' attribute in the transactional advice (<tx:advice/>) if the bean name of the PlatformTransactionManager that you want to wire in has the name 'transactionManager'. If the PlatformTransactionManager bean that you want to wire in has any other name, then you have to be explicit and use the 'transaction-manager' attribute as in the example above.

The <aop:config/> definition ensures that the transactional advice defined by the 'txAdvice' bean actually executes at the appropriate points in the program. First we define a pointcut that matches the execution of any operation defined in the FooService interface ('fooServiceOperation'). Then we associate the pointcut with the 'txAdvice' using an advisor. The result indicates that at the execution of a 'fooServiceOperation', the advice defined by 'txAdvice' will be run.

The expression defined within the <aop:pointcut/> element is an AspectJ pointcut expression; see the chapter entitled Chapter 6, Aspect Oriented Programming with Spring for more details on pointcut expressions in Spring 2.0.

A common requirement is to make an entire service layer transactional. The best way to do this is simply to change the pointcut expression to match any operation in your service layer. For example:

<aop:config>
    <aop:pointcut id="fooServiceMethods" expression="execution(* x.y.service.*.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="fooServiceMethods"/>
</aop:config>

(This example assumes that all your service interfaces are defined in the 'x.y.service' package; see the chapter entitled Chapter 6, Aspect Oriented Programming with Spring for more details.)

Now that we've analyzed the configuration, you may be asking yourself, “Okay... but what does all this configuration actually do?”.

The above configuration is going to effect the creation of a transactional proxy around the object that is created from the 'fooService' bean definition. The proxy will be configured with the transactional advice, so that when an appropriate method is invoked on the proxy, a transaction may be started, suspended, be marked as read-only, etc., depending on the transaction configuration associated with that method. Consider the following program that test drives the above configuration.

public final class Boot {

    public static void main(final String[] args) throws Exception {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("context.xml", Boot.class);
        FooService fooService = (FooService) ctx.getBean("fooService");
        fooService.insertFoo(new Foo());
    }
}

The output from running the above program will look something like this. (Please note that the Log4J output and the stacktrace from the UnsupportedOperationException thrown by the insertFoo(..) method of the DefaultFooService class have been truncated in the interest of clarity.)

    <!-- the Spring container is starting up... -->
[AspectJInvocationContextExposingAdvisorAutoProxyCreator] - Creating implicit proxy
        for bean 'fooService' with 0 common interceptors and 1 specific interceptors
    <!-- the DefaultFooService is actually proxied -->
[JdkDynamicAopProxy] - Creating JDK dynamic proxy for [x.y.service.DefaultFooService]

    <!-- ... the insertFoo(..) method is now being invoked on the proxy -->

[TransactionInterceptor] - Getting transaction for x.y.service.FooService.insertFoo
    <!-- the transactional advice kicks in here... -->
[DataSourceTransactionManager] - Creating new transaction with name [x.y.service.FooService.insertFoo]
[DataSourceTransactionManager] - Acquired Connection
        [org.apache.commons.dbcp.PoolableConnection@a53de4] for JDBC transaction

    <!-- the insertFoo(..) method from DefaultFooService throws an exception... -->
[RuleBasedTransactionAttribute] - Applying rules to determine whether transaction should
        rollback on java.lang.UnsupportedOperationException
[TransactionInterceptor] - Invoking rollback for transaction on x.y.service.FooService.insertFoo
        due to throwable [java.lang.UnsupportedOperationException]

   <!-- and the transaction is rolled back (by default, RuntimeException instances cause rollback) -->
[DataSourceTransactionManager] - Rolling back JDBC transaction on Connection
        [org.apache.commons.dbcp.PoolableConnection@a53de4]
[DataSourceTransactionManager] - Releasing JDBC Connection after transaction
[DataSourceUtils] - Returning JDBC Connection to DataSource

Exception in thread "main" java.lang.UnsupportedOperationException
	at x.y.service.DefaultFooService.insertFoo(DefaultFooService.java:14)
   <!-- AOP infrastructure stack trace elements removed for clarity -->
	at $Proxy0.insertFoo(Unknown Source)
	at Boot.main(Boot.java:11)

9.5.3. Rolling back

The previous section outlined the basics of how to specify the transactional settings for the classes, typically service layer classes, in your application in a declarative fashion. This section describes how you can control the rollback of transactions in a simple declarative fashion.

The recommended way to indicate to the Spring Framework's transaction infrastructure that a transaction's work is to be rolled back is to throw an Exception from code that is currently executing in the context of a transaction. The Spring Framework's transaction infrastructure code will catch any unhandled Exception as it bubbles up the call stack, and will mark the transaction for rollback.

However, please note that the Spring Framework's transaction infrastructure code will, by default, only mark a transaction for rollback in the case of runtime, unchecked exceptions; that is, when the thrown exception is an instance or subclass of RuntimeException. (Errors will also - by default - result in a rollback.) Checked exceptions that are thrown from a transactional method will not result in the transaction being rolled back.
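For example, the following minimal sketch (the method and the validation logic are illustrative assumptions) shows that simply throwing an unchecked exception from within a transactionally-advised method is enough to have the surrounding transaction rolled back; no Spring API needs to be touched.

public void updateStock(String productId, int quantity) {
    if (quantity < 0) {
        // RuntimeException (and Error) instances trigger rollback by default
        throw new IllegalArgumentException("quantity must not be negative: " + quantity);
    }
    // ... data access work executing within the transaction
}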

Exactly which Exception types mark a transaction for rollback can be configured. Find below a snippet of XML configuration that demonstrates how one would configure rollback for a checked, application-specific Exception type.

<tx:advice id="txAdvice" transaction-manager="txManager">
  <tx:attributes>
    <tx:method name="get*" read-only="false" rollback-for="NoProductInStockException"/>
    <tx:method name="*"/>
  </tx:attributes>
</tx:advice>

The second way to indicate to the transaction infrastructure that a rollback is required is to do so programmatically. Although very simple, this way is quite invasive, and tightly couples your code to the Spring Framework's transaction infrastructure. Find below a snippet of code that does programmatic rollback of a Spring Framework-managed transaction:

public void resolvePosition() {
    try {
        // some business logic...
    } catch (NoProductInStockException ex) {
        // trigger rollback programmatically
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}

You are strongly encouraged to use the declarative approach to rollback if at all possible. Programmatic rollback is available should you need it, but its usage flies in the face of achieving a clean POJO-based application model.

9.5.4. Configuring different transactional semantics for different beans

Consider the scenario where you have a number of service layer objects, and you want to apply totally different transactional configuration to each of them. This is achieved by defining distinct <aop:advisor/> elements with differing 'pointcut' and 'advice-ref' attribute values.

Let's assume that all of your service layer classes are defined in a root 'x.y.service' package. To make all beans that are instances of classes defined in that package (or in subpackages) and that have names ending in 'Service' have the default transactional configuration, you would write the following:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
    http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
    http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

    <aop:config>

        <aop:pointcut id="serviceOperation"
                    expression="execution(* x.y.service..*Service.*(..))"/>

        <aop:advisor pointcut-ref="serviceOperation" advice-ref="txAdvice"/>

    </aop:config>

    <!-- these two beans will be transactional... -->
    <bean id="fooService" class="x.y.service.DefaultFooService"/>
    <bean id="barService" class="x.y.service.extras.SimpleBarService"/>

    <!-- ... and these two beans won't -->
    <bean id="anotherService" class="org.xyz.SomeService"/> <!-- (not in the right package) -->
    <bean id="barManager" class="x.y.service.SimpleBarManager"/> <!-- (doesn't end in 'Service') -->

    <tx:advice id="txAdvice">
        <tx:attributes>
            <tx:method name="get*" read-only="true"/>
            <tx:method name="*"/>
        </tx:attributes>
    </tx:advice>

    <!-- other transaction infrastructure beans such as a PlatformTransactionManager omitted... -->

</beans>

Find below an example of configuring two distinct beans with totally different transactional settings.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
    http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
    http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

    <aop:config>

        <aop:pointcut id="defaultServiceOperation"
                    expression="execution(* x.y.service.*Service.*(..))"/>

        <aop:pointcut id="noTxServiceOperation"
                    expression="execution(* x.y.service.ddl.DefaultDdlManager.*(..))"/>

        <aop:advisor pointcut-ref="defaultServiceOperation" advice-ref="defaultTxAdvice"/>

        <aop:advisor pointcut-ref="noTxServiceOperation" advice-ref="noTxAdvice"/>

    </aop:config>

    <!-- this bean will be transactional (see the 'defaultServiceOperation' pointcut) -->
    <bean id="fooService" class="x.y.service.DefaultFooService"/>

    <!-- this bean will also be transactional, but with totally different transactional settings -->
    <bean id="anotherFooService" class="x.y.service.ddl.DefaultDdlManager"/>

    <tx:advice id="defaultTxAdvice">
        <tx:attributes>
            <tx:method name="get*" read-only="true"/>
            <tx:method name="*"/>
        </tx:attributes>
    </tx:advice>
    
    <tx:advice id="noTxAdvice">
        <tx:attributes>
            <tx:method name="*" propagation="NEVER"/>
        </tx:attributes>
    </tx:advice>

    <!-- other transaction infrastructure beans such as a PlatformTransactionManager omitted... -->

</beans>

9.5.5. <tx:advice/> settings

This section summarises the various transactional settings that can be specified using the <tx:advice/> tag. The default <tx:advice/> settings are:

  • The propagation setting is REQUIRED

  • The isolation level is DEFAULT

  • The transaction is read/write

  • The transaction timeout defaults to the default timeout of the underlying transaction system, or none if timeouts are not supported

  • Any RuntimeException will trigger rollback, and any checked Exception will not

These default settings can, of course, be changed; the various attributes of the <tx:method/> tags that are nested within <tx:advice/> and <tx:attributes/> tags are summarized below:

Table 9.1. <tx:method/> settings

Attribute       | Required? | Default  | Description
name            | Yes       |          | The method name(s) with which the transaction attributes are to be associated. The wildcard (*) character can be used to associate the same transaction attribute settings with a number of methods; for example, 'get*', 'handle*', 'on*Event', etc.
propagation     | No        | REQUIRED | The transaction propagation behavior
isolation       | No        | DEFAULT  | The transaction isolation level
timeout         | No        | -1       | The transaction timeout value (in seconds)
read-only       | No        | false    | Is this transaction read-only?
rollback-for    | No        |          | The Exception(s) that will trigger rollback; comma-delimited. For example, 'com.foo.MyBusinessException,ServletException'
no-rollback-for | No        |          | The Exception(s) that will not trigger rollback; comma-delimited. For example, 'com.foo.MyBusinessException,ServletException'

At the time of writing it is not possible to have explicit control over the name of a transaction, where 'name' means the transaction name that will be shown in a transaction monitor, if applicable (for example, WebLogic's transaction monitor), and in logging output. For declarative transactions, the transaction name is always the fully-qualified class name + "." + method name of the transactionally-advised class. For example 'com.foo.BusinessService.handlePayment'.

9.5.6. Using @Transactional

[Note]Note

The functionality offered by the @Transactional annotation and the support classes is only available to you if you are using at least Java 5 (Tiger).

In addition to the XML-based declarative approach to transaction configuration, you can also use an annotation-based approach to transaction configuration. Declaring transaction semantics directly in the Java source code puts the declarations much closer to the affected code, and there is generally not much danger of undue coupling, since code that is meant to be used transactionally is almost always deployed that way anyway.

The ease-of-use afforded by the use of the @Transactional annotation is best illustrated with an example, after which all of the details will be explained. Consider the following class definition:

// the service class that we want to make transactional
@Transactional
public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) { /* ... */ return null; }

    public Foo getFoo(String fooName, String barName) { /* ... */ return null; }

    public void insertFoo(Foo foo) { /* ... */ }

    public void updateFoo(Foo foo) { /* ... */ }
}

When the above POJO is defined as a bean in a Spring IoC container, the bean instance can be made transactional by adding merely one line of XML configuration, like so:

<!-- from the file 'context.xml' -->
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
       http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
       http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">
  
  <!-- this is the service object that we want to make transactional -->
  <bean id="fooService" class="x.y.service.DefaultFooService"/>

  <!-- enable the configuration of transactional behavior based on annotations -->
  <tx:annotation-driven transaction-manager="txManager"/>

  <!-- a PlatformTransactionManager is still required -->
  <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <!-- (this dependency is defined somewhere else) -->
    <property name="dataSource" ref="dataSource"/>
  </bean>
  
  <!-- other <bean/> definitions here -->

</beans>
[Tip]Tip

You can actually omit the 'transaction-manager' attribute in the <tx:annotation-driven/> tag if the bean name of the PlatformTransactionManager that you want to wire in has the name 'transactionManager'. If the PlatformTransactionManager bean that you want to dependency inject has any other name, then you have to be explicit and use the 'transaction-manager' attribute as in the example above.

The @Transactional annotation may be placed before an interface definition, a method on an interface, a class definition, or a public method on a class. However, please note that the mere presence of the @Transactional annotation is not enough to actually turn on the transactional behavior - the @Transactional annotation is simply metadata that can be consumed by something that is @Transactional-aware and that can use the metadata to configure the appropriate beans with transactional behavior. In the case of the above example, it is the presence of the <tx:annotation-driven/> element that switches on the transactional behavior.

The Spring team's recommendation is that you only annotate concrete classes with the @Transactional annotation, as opposed to annotating interfaces. You certainly can place the @Transactional annotation on an interface (or an interface method), but this will only work as you would expect it to if you are using interface-based proxies. The fact that annotations are not inherited means that if you are using class-based proxies then the transaction settings will not be recognised by the class-based proxying infrastructure and the object will not be wrapped in a transactional proxy (which would be decidedly bad). So please do take the Spring team's advice and only annotate concrete classes (and the methods of concrete classes) with the @Transactional annotation.

Table 9.2. <tx:annotation-driven/> settings

Attribute           | Required? | Default            | Description
transaction-manager | No        | transactionManager | The name of the transaction manager to use. Only required if the name of the transaction manager is not 'transactionManager', as in the example above.
proxy-target-class  | No        |                    | Controls what type of transactional proxies are created for classes annotated with the @Transactional annotation. If the "proxy-target-class" attribute is set to "true", then class-based proxies will be created. If "proxy-target-class" is "false" or if the attribute is omitted, then standard JDK interface-based proxies will be created. (See the section entitled Section 6.6, “Proxying mechanisms” for a detailed examination of the different proxy types.)
order               | No        |                    | Defines the order of the transaction advice that will be applied to beans annotated with @Transactional. More on the rules related to ordering of AOP advice can be found in the AOP chapter (see Section 6.2.4.7, “Advice ordering”). Note that not specifying any ordering will leave the decision as to what order advice is run in to the AOP subsystem.


[Note]Note

The "proxy-target-class" attribute on the <tx:annotation-driven/> element controls what type of transactional proxies are created for classes annotated with the @Transactional annotation. If "proxy-target-class" attribute is set to "true", then class-based proxies will be created. If "proxy-target-class" is "false" or if the attribute is omitted, then standard JDK interface-based proxies will be created. (See the section entitled Section 6.6, “Proxying mechanisms” for a detailed examination of the different proxy types.)

The most derived location takes precedence when evaluating the transactional settings for a method. In the case of the following example, the DefaultFooService class is annotated at the class level with the settings for a read-only transaction, but the @Transactional annotation on the updateFoo(Foo) method in the same class takes precedence over the transactional settings defined at the class level.

@Transactional(readOnly = true)
public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) {
        // do something
        return null;
    }

    // these settings have precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // do something
    }
}

9.5.6.1. @Transactional settings

The @Transactional annotation is metadata that specifies that an interface, class, or method must have transactional semantics; for example, “start a brand new read-only transaction when this method is invoked, suspending any existing transaction”. The default @Transactional settings are:

  • The propagation setting is PROPAGATION_REQUIRED

  • The isolation level is ISOLATION_DEFAULT

  • The transaction is read/write

  • The transaction timeout defaults to the default timeout of the underlying transaction system, or none if timeouts are not supported

  • Any RuntimeException will trigger rollback, and any checked Exception will not

These default settings can, of course, be changed; the various properties of the @Transactional annotation are summarized in the following table:

Table 9.3. @Transactional properties

Property               | Type                                                                  | Description
propagation            | enum: Propagation                                                     | optional propagation setting
isolation              | enum: Isolation                                                       | optional isolation level
readOnly               | boolean                                                               | read/write vs. read-only transaction
timeout                | int (in seconds granularity)                                          | the transaction timeout
rollbackFor            | an array of Class objects, which must be derived from Throwable      | an optional array of exception classes which must cause rollback
rollbackForClassname   | an array of class names, which must be derived from Throwable        | an optional array of names of exception classes which must cause rollback
noRollbackFor          | an array of Class objects, which must be derived from Throwable      | an optional array of exception classes which must not cause rollback
noRollbackForClassname | an array of String class names, which must be derived from Throwable | an optional array of names of exception classes which must not cause rollback


At the time of writing it is not possible to have explicit control over the name of a transaction, where 'name' means the transaction name that will be shown in a transaction monitor, if applicable (for example, WebLogic's transaction monitor), and in logging output. For declarative transactions, the transaction name is always the fully-qualified class name + "." + method name of the transactionally-advised class. For example 'com.foo.BusinessService.handlePayment'.
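As an illustration of the rollbackFor property summarized above, find below a minimal sketch (the method signature and the Order type are illustrative assumptions; NoProductInStockException is the checked exception used in the earlier rollback example) showing how a checked exception can be made to trigger rollback:

@Transactional(readOnly = false, rollbackFor = NoProductInStockException.class)
public void placeOrder(Order order) throws NoProductInStockException {
    // a NoProductInStockException thrown from within this method will now mark the
    // transaction for rollback, even though it is a checked exception
    // ...
}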

9.5.7. Advising transactional operations

Consider the situation where you have an instance of a class, and you would like to execute both transactional advice and (to keep things simple) some basic profiling advice. So how do you effect this in the context of using <tx:annotation-driven/>?

What we want to see when we invoke the updateFoo(Foo) method is:

  • the configured profiling aspect starting up,

  • then the transactional advice executing,

  • then the method on the advised object executing

  • then the transaction committing (we'll assume a sunny day scenario here),

  • and then finally the profiling aspect reporting (somehow) exactly how long the whole transactional method invocation took

[Note]Note

This chapter is not concerned with explaining AOP in any great detail (except as it applies to transactions). Please see the chapter entitled Chapter 6, Aspect Oriented Programming with Spring for detailed coverage of the various bits and pieces of the following AOP configuration (and AOP in general).

Here is the code for a simple profiling aspect. (The ordering of advice is controlled via the Ordered interface. For full details on advice ordering, see the section entitled Section 6.2.4.7, “Advice ordering”.)

package x.y;

import org.aspectj.lang.ProceedingJoinPoint;
import org.springframework.util.StopWatch;
import org.springframework.core.Ordered;

public class SimpleProfiler implements Ordered {

    private int order;

    // allows us to control the ordering of advice
    public int getOrder() {
        return this.order;
    }

    public void setOrder(int order) {
        this.order = order;
    }

    // this method is the around advice
    public Object profile(ProceedingJoinPoint call) throws Throwable {
        Object returnValue;
        StopWatch clock = new StopWatch(getClass().getName());
        try {
            clock.start(call.toShortString());
            returnValue = call.proceed();
        } finally {
            clock.stop();
            System.out.println(clock.prettyPrint());
        }
        return returnValue;
    }
}
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="
   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

    <bean id="fooService" class="x.y.service.DefaultFooService"/>

    <!-- this is the aspect -->
    <bean id="profiler" class="x.y.SimpleProfiler">
        <!-- execute before the transactional advice (hence the lower order number) -->
        <property name="order" value="1"/>
    </bean>
    
    <tx:annotation-driven transaction-manager="txManager"/>

    <aop:config>
        <!-- this advice will execute around the transactional advice -->
        <aop:aspect id="profilingAspect" ref="profiler">
            <aop:pointcut id="serviceMethodWithReturnValue"
                          expression="execution(!void x.y..*Service.*(..))"/>
            <aop:around method="profile" pointcut-ref="serviceMethodWithReturnValue"/>
        </aop:aspect>
    </aop:config>

    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
        <property name="url" value="jdbc:oracle:thin:@rj-t42:1521:elvis"/>
        <property name="username" value="scott"/>
        <property name="password" value="tiger"/>
    </bean>

    <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="dataSource"/>
    </bean>

</beans>

The result of the above configuration will be a 'fooService' bean that has profiling and transactional aspects applied to it in that order. The configuration of any number of additional aspects is effected in a similar fashion.

Finally, find below some example configuration for effecting the same setup as above, but using the purely XML declarative approach.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="
   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

    <bean id="fooService" class="x.y.service.DefaultFooService"/>

    <!-- the profiling advice -->
    <bean id="profiler" class="x.y.SimpleProfiler">
        <!-- execute before the transactional advice (hence the lower order number) -->
        <property name="order" value="1"/>
    </bean>

    <aop:config>
        
        <aop:pointcut id="entryPointMethod" expression="execution(* x.y..*Service.*(..))"/>

        <!-- will execute after the profiling advice (c.f. the order attribute) -->
        <aop:advisor
                advice-ref="txAdvice"
                pointcut-ref="entryPointMethod"
                order="2"/> <!-- order value is higher than the profiling aspect -->

        <aop:aspect id="profilingAspect" ref="profiler">
            <aop:pointcut id="serviceMethodWithReturnValue"
                          expression="execution(!void x.y..*Service.*(..))"/>
            <aop:around method="profile" pointcut-ref="serviceMethodWithReturnValue"/>
        </aop:aspect>

    </aop:config>

    <tx:advice id="txAdvice" transaction-manager="txManager">
        <tx:attributes>
            <tx:method name="get*" read-only="true"/>
            <tx:method name="*"/>
        </tx:attributes>
    </tx:advice>

    <!-- other <bean/> definitions such as a DataSource and a PlatformTransactionManager here -->

</beans>

The result of the above configuration will be a 'fooService' bean that has profiling and transactional aspects applied to it in that order. If we wanted the profiling advice to execute after the transactional advice on the way in, and before the transactional advice on the way out, then we would simply swap the value of the profiling aspect bean's 'order' property such that it was higher than the transactional advice's order value.

The configuration of any number of additional aspects is effected in a similar fashion.

9.5.8. Using @Transactional with AspectJ

It is also possible to use the Spring Framework's @Transactional support outside of a Spring container by means of an AspectJ aspect. To use this support you must first annotate your classes (and optionally your classes' methods) with the @Transactional annotation, and then you must link (weave) your application with the org.springframework.transaction.aspectj.AnnotationTransactionAspect defined in the spring-aspects.jar file. The aspect must also be configured with a transaction manager. You could of course use the Spring Framework's IoC container to take care of dependency injecting the aspect, but since we're focusing here on applications running outside of a Spring container, we'll show you how to do it programmatically.

[Note]Note

Prior to continuing, you may well want to read the previous sections entitled Section 9.5.6, “Using @Transactional” and Chapter 6, Aspect Oriented Programming with Spring respectively.

// construct an appropriate transaction manager 
DataSourceTransactionManager txManager = new DataSourceTransactionManager(getDataSource());

// configure the AnnotationTransactionAspect to use it; this must be done before executing any transactional methods
AnnotationTransactionAspect.aspectOf().setTransactionManager(txManager); 
[Note]Note

When using this aspect, you must annotate the implementation class (and/or methods within that class), not the interface (if any) that the class implements. AspectJ follows Java's rule that annotations on interfaces are not inherited.

The @Transactional annotation on a class specifies the default transaction semantics for the execution of any public method in the class.

The @Transactional annotation on a method within the class overrides the default transaction semantics given by the class annotation (if present). Any method may be annotated, regardless of visibility: annotating non-public methods directly is the only way to get transaction demarcation for the execution of such methods.

To weave your applications with the AnnotationTransactionAspect you must either build your application with AspectJ (see the AspectJ Development Guide) or use load-time weaving. See the section entitled Section 6.8.4, “Using AspectJ Load-time weaving (LTW) with Spring applications” for a discussion of load-time weaving with AspectJ.

9.6. Programmatic transaction management

The Spring Framework provides two means of programmatic transaction management:

  • Using the TransactionTemplate.

  • Using a PlatformTransactionManager implementation directly.

If you are going to use programmatic transaction management, the Spring team generally recommends the first approach (namely, using the TransactionTemplate). The second approach is similar to using the JTA UserTransaction API (although exception handling is less cumbersome).

9.6.1. Using the TransactionTemplate

The TransactionTemplate adopts the same approach as other Spring templates such as the JdbcTemplate. It uses a callback approach, to free application code from having to do the boilerplate acquisition and release of transactional resources, and results in code that is intention driven, in that the code that is written focuses solely on what the developer wants to do.

[Note]Note

As you will immediately see in the examples that follow, using the TransactionTemplate absolutely couples you to Spring's transaction infrastructure and APIs. Whether or not programmatic transaction management is suitable for your development needs is a decision that you will have to make yourself.

Application code that must execute in a transactional context, and that will use the TransactionTemplate explicitly, looks like this. You, as an application developer, will write a TransactionCallback implementation (typically expressed as an anonymous inner class) that will contain all of the code that you need to have execute in the context of a transaction. You will then pass an instance of your custom TransactionCallback to the execute(..) method exposed on the TransactionTemplate.

public class SimpleService implements Service {

    // single TransactionTemplate shared amongst all methods in this instance
    private final TransactionTemplate transactionTemplate;

    // use constructor-injection to supply the PlatformTransactionManager
    public SimpleService(PlatformTransactionManager transactionManager) {
        Assert.notNull(transactionManager, "The 'transactionManager' argument must not be null.");
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public Object someServiceMethod() {
        return transactionTemplate.execute(new TransactionCallback() {

            // the code in this method executes in a transactional context
            public Object doInTransaction(TransactionStatus status) {
                updateOperation1();
                return resultOfUpdateOperation2();
            }
        });
    }
}

If there is no return value, use the convenient TransactionCallbackWithoutResult class via an anonymous class like so:

transactionTemplate.execute(new TransactionCallbackWithoutResult() {
    
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        updateOperation1();
        updateOperation2();
    }
});

Code within the callback can roll the transaction back by calling the setRollbackOnly() method on the supplied TransactionStatus object.

transactionTemplate.execute(new TransactionCallbackWithoutResult() {

    protected void doInTransactionWithoutResult(TransactionStatus status) {
        try {
            updateOperation1();
            updateOperation2();
        } catch (SomeBusinessException ex) {
            status.setRollbackOnly();
        }
    }
});

9.6.1.1. Specifying transaction settings

Transaction settings such as the propagation mode, the isolation level, the timeout, and so forth can be set on the TransactionTemplate either programmatically or in configuration. TransactionTemplate instances by default have the default transactional settings. Find below an example of programmatically customizing the transactional settings for a specific TransactionTemplate.

public class SimpleService implements Service {

    private final TransactionTemplate transactionTemplate;

    public SimpleService(PlatformTransactionManager transactionManager) {
        Assert.notNull(transactionManager, "The 'transactionManager' argument must not be null.");
        this.transactionTemplate = new TransactionTemplate(transactionManager);

        // the transaction settings can be set here explicitly if so desired
        this.transactionTemplate.setIsolationLevel(TransactionDefinition.ISOLATION_READ_UNCOMMITTED);
        this.transactionTemplate.setTimeout(30); // 30 seconds
        // and so forth...
    }
}

Find below an example of defining a TransactionTemplate with some custom transactional settings, using Spring XML configuration. The 'sharedTransactionTemplate' can then be injected into as many services as are required.

<bean id="sharedTransactionTemplate"
      class="org.springframework.transaction.support.TransactionTemplate">
    <property name="isolationLevelName" value="ISOLATION_READ_UNCOMMITTED"/>
    <property name="timeout" value="30"/>
</bean>"

Finally, instances of the TransactionTemplate class are threadsafe, in that instances do not maintain any conversational state. TransactionTemplate instances do however maintain configuration state, so while a number of classes may choose to share a single instance of a TransactionTemplate, if a class needed to use a TransactionTemplate with different settings (for example, a different isolation level), then two distinct TransactionTemplate instances would need to be created and used.
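As a brief sketch of the latter case (the isolation level chosen and the 'transactionManager' reference are illustrative assumptions), using different settings simply means constructing and configuring a second template instance:

// one template with the default settings, shared wherever the defaults are appropriate
TransactionTemplate defaultTemplate = new TransactionTemplate(transactionManager);

// a second, separately configured template for callers that need a different isolation level
TransactionTemplate serializableTemplate = new TransactionTemplate(transactionManager);
serializableTemplate.setIsolationLevel(TransactionDefinition.ISOLATION_SERIALIZABLE);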

9.6.2. Using the PlatformTransactionManager

You can also use the org.springframework.transaction.PlatformTransactionManager directly to manage your transactions. Simply pass the implementation of the PlatformTransactionManager you're using to your bean via a bean reference. Then, using the TransactionDefinition and TransactionStatus objects, you can initiate transactions, roll back, and commit.

DefaultTransactionDefinition def = new DefaultTransactionDefinition();
// explicitly setting the transaction name is something that can only be done programmatically
def.setName("SomeTxName");
def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);

TransactionStatus status = txManager.getTransaction(def);
try {
    // execute your business logic here
}
catch (MyException ex) {
    txManager.rollback(status);
    throw ex;
}
txManager.commit(status);

9.7. Choosing between programmatic and declarative transaction management

Programmatic transaction management is usually a good idea only if you have a small number of transactional operations. For example, if you have a web application that requires transactions only for certain update operations, you may not want to set up transactional proxies using Spring or any other technology. In this case, using the TransactionTemplate may be a good approach. Being able to set the transaction name explicitly is also something that can only be done using the programmatic approach to transaction management.

On the other hand, if your application has numerous transactional operations, declarative transaction management is usually worthwhile. It keeps transaction management out of business logic, and is not difficult to configure. When using the Spring Framework, rather than EJB CMT, the configuration cost of declarative transaction management is greatly reduced.

9.8. Application server-specific integration

Spring's transaction abstraction is generally application server agnostic. Additionally, Spring's JtaTransactionManager class, which can optionally perform a JNDI lookup for the JTA UserTransaction and TransactionManager objects, can be set to autodetect the location for the latter object, which varies by application server. Having access to the TransactionManager instance does allow enhanced transaction semantics. Please see the JtaTransactionManager Javadocs for more details.

9.8.1. BEA WebLogic

In a WebLogic 7.0, 8.1 or higher environment, you will generally prefer to use WebLogicJtaTransactionManager instead of the stock JtaTransactionManager class. This is a WebLogic-specific subclass of the normal JtaTransactionManager that supports the full power of Spring's transaction definitions in a WebLogic-managed transaction environment, beyond standard JTA semantics: features include transaction names, per-transaction isolation levels, and proper resuming of transactions in all cases.

9.8.2. IBM WebSphere

In a WebSphere 5.1, 5.0 and 4 environment, you may wish to use Spring's WebSphereTransactionManagerFactoryBean class. This is a factory bean which retrieves the JTA TransactionManager in a WebSphere environment, which is done via WebSphere's static access methods. (These methods are different for each version of WebSphere.) Once the JTA TransactionManager instance has been obtained via this factory bean, Spring's JtaTransactionManager may be configured with a reference to it, for enhanced transaction semantics over the use of only the JTA UserTransaction object. Please see the Javadocs for full details.

9.9. Solutions to common problems

9.9.1. Use of the wrong transaction manager for a specific DataSource

You should take care to use the correct PlatformTransactionManager implementation for your requirements. It is important to understand how the Spring Framework's transaction abstraction works with JTA global transactions. Used properly, there is no conflict here: the Spring Framework merely provides a straightforward and portable abstraction. If you are using global transactions, you must use the org.springframework.transaction.jta.JtaTransactionManager class (or an application server-specific subclass of it) for all your transactional operations. Otherwise the transaction infrastructure will attempt to perform local transactions on resources such as container DataSource instances. Such local transactions don't make sense, and a good application server will treat them as errors.

9.10. Further Resources

Find below links to further resources about the Spring Framework's transaction support.

  • This book, Java Transaction Design Strategies from InfoQ, is a well-paced introduction to transactions in the Java space. It also includes side-by-side examples of how to configure and use transactions in both the Spring Framework and EJB3.

Chapter 10. DAO support

10.1. Introduction

The Data Access Object (DAO) support in Spring is primarily aimed at making it easy to work with data access technologies like JDBC, Hibernate or JDO in a standardized way. This allows one to switch between the aforementioned persistence technologies fairly easily and it also allows one to code without worrying about catching exceptions that are specific to each technology.

10.2. Consistent exception hierarchy

Spring provides a convenient translation from technology specific exceptions like SQLException to its own exception hierarchy with the DataAccessException as the root exception. These exceptions wrap the original exception so there is never any risk that one might lose any information as to what might have gone wrong.

In addition to JDBC exceptions, Spring can also wrap Hibernate-specific exceptions, converting them from proprietary, checked exceptions (in the case of versions of Hibernate prior to Hibernate 3.0), to a set of abstracted runtime exceptions (the same is true for JDO exceptions). This allows one to handle most persistence exceptions, which are non-recoverable, only in the appropriate layers, without annoying boilerplate catch and throw blocks, and exception declarations. (One can still trap and handle exceptions anywhere one needs to though.) As mentioned above, JDBC exceptions (including database-specific dialects) are also converted to the same hierarchy, meaning that one can perform some operations with JDBC within a consistent programming model.
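To make this concrete, the following minimal sketch (class name, table and SQL are illustrative assumptions, not taken from this reference) catches Spring's unchecked DataAccessException around a JdbcTemplate call; a DAO based on HibernateTemplate or JdoTemplate would be handled by the same catch block, since exceptions from all of these technologies are translated into the same hierarchy:

import javax.sql.DataSource;

import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;

public class AuditLogger {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public boolean tryLog(String message) {
        try {
            this.jdbcTemplate.update("insert into t_audit_log (message) values (?)", new Object[] {message});
            return true;
        } catch (DataAccessException ex) {
            // one consistent, unchecked hierarchy; the original SQLException (or Hibernate/JDO
            // exception) is preserved as the root cause, so no information is lost
            return false;
        }
    }
}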

The above holds true for the various template-based versions of the ORM access framework. If one uses the interceptor-based classes, then the application must take care of handling HibernateExceptions and JDOExceptions itself, preferably by delegating to the convertHibernateAccessException(..) method of SessionFactoryUtils or the convertJdoAccessException(..) method of PersistenceManagerFactoryUtils respectively. These methods convert the exceptions to ones that are compatible with the org.springframework.dao exception hierarchy. As JDOExceptions are unchecked, they can simply be thrown as well, though this sacrifices generic DAO abstraction in terms of exceptions.

The exception hierarchy that Spring uses is outlined in the following image:

(Please note that the class hierarchy detailed in the above image shows only a subset of the whole, rich, DataAccessException hierarchy.)

10.3. Consistent abstract classes for DAO support

To make it easier to work with a variety of data access technologies such as JDBC, JDO and Hibernate in a consistent way, Spring provides a set of abstract DAO classes that one can extend. These abstract classes have methods for providing the data source and any other configuration settings that are specific to the technology one currently is using.

Dao support classes:

  • JdbcDaoSupport - super class for JDBC data access objects. Requires a DataSource to be provided; in turn, this class provides a JdbcTemplate instance initialized from the supplied DataSource to subclasses.

  • HibernateDaoSupport - super class for Hibernate data access objects. Requires a SessionFactory to be provided; in turn, this class provides a HibernateTemplate instance initialized from the supplied SessionFactory to subclasses. Can alternatively be initialized directly via a HibernateTemplate, to reuse the latter's settings like SessionFactory, flush mode, exception translator, etc.

  • JdoDaoSupport - super class for JDO data access objects. Requires a PersistenceManagerFactory to be provided; in turn, this class provides a JdoTemplate instance initialized from the supplied PersistenceManagerFactory to subclasses.

  • JpaDaoSupport - super class for JPA data access objects. Requires an EntityManagerFactory to be provided; in turn, this class provides a JpaTemplate instance initialized from the supplied EntityManagerFactory to subclasses.
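For example, a minimal sketch (the class name, table and SQL are illustrative assumptions) of a DAO built on the first of these support classes, JdbcDaoSupport, might look like the following; the container supplies a DataSource via the inherited setDataSource(..) method, and the superclass exposes a ready-made JdbcTemplate to the subclass:

import org.springframework.jdbc.core.support.JdbcDaoSupport;

public class EventCountDao extends JdbcDaoSupport {

    public int countEvents() {
        // getJdbcTemplate() is provided by JdbcDaoSupport, initialized from the injected DataSource
        return getJdbcTemplate().queryForInt("select count(*) from t_event");
    }
}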

Chapter 11. Data access using JDBC

11.1. Introduction

The value-add provided by the Spring Framework's JDBC abstraction framework is perhaps best shown by the following list of steps involved in JDBC processing (note that only defining the connection parameters (1), specifying the statement (3), and doing the work for each iteration (6) need to be coded by an application developer):

  1. Define connection parameters

  2. Open the connection

  3. Specify the statement

  4. Prepare and execute the statement

  5. Set up the loop to iterate through the results (if any)

  6. Do the work for each iteration

  7. Process any exception

  8. Handle transactions

  9. Close the connection

The Spring Framework takes care of all the grungy, low-level details that can make JDBC such a tedious API to develop with.

11.1.1. The package hierarchy

The Spring Framework's JDBC abstraction framework consists of four different packages, namely core, datasource, object, and support.

The org.springframework.jdbc.core package contains the JdbcTemplate class and its various callback interfaces, plus a variety of related classes.

The org.springframework.jdbc.datasource package contains a utility class for easy DataSource access, and various simple DataSource implementations that can be used for testing and running unmodified JDBC code outside of a J2EE container. The utility class provides static methods to obtain connections from JNDI and to close connections if necessary. It has support for thread-bound connections, e.g. for use with DataSourceTransactionManager.

Next, the org.springframework.jdbc.object package contains classes that represent RDBMS queries, updates, and stored procedures as thread safe, reusable objects. This approach is modeled by JDO, although of course objects returned by queries are “disconnected” from the database. This higher level of JDBC abstraction depends on the lower-level abstraction in the org.springframework.jdbc.core package.

Finally the org.springframework.jdbc.support package is where you find the SQLException translation functionality and some utility classes.

Exceptions thrown during JDBC processing are translated to exceptions defined in the org.springframework.dao package. This means that code using the Spring JDBC abstraction layer does not need to implement JDBC or RDBMS-specific error handling. All translated exceptions are unchecked giving you the option of catching the exceptions that you can recover from while allowing other exceptions to be propagated to the caller.
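
Because the translated exceptions are unchecked, a DAO can choose to catch only the specific exceptions it can actually recover from and let everything else propagate. A hedged sketch (the exception choice and SQL are illustrative):

try {
    jdbcTemplate.update(
        "insert into t_actor (id, first_name) values (?, ?)",
        new Object[] {new Long(1), "Joe"});
}
catch (DataIntegrityViolationException ex) {
    // a duplicate key, for example: recover here; other DataAccessExceptions propagate
}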

11.2. Using the JDBC Core classes to control basic JDBC processing and error handling

11.2.1. JdbcTemplate

The JdbcTemplate class is the central class in the JDBC core package. It simplifies the use of JDBC since it handles the creation and release of resources. This helps to avoid common errors such as forgetting to close the connection. It executes the core JDBC workflow like statement creation and execution, leaving application code to provide SQL and extract results. This class executes SQL queries, update statements or stored procedure calls, initiating iteration over ResultSets and extraction of returned parameter values. It also catches JDBC exceptions and translates them to the generic, more informative, exception hierarchy defined in the org.springframework.dao package.

Code using the JdbcTemplate only needs to implement callback interfaces, giving it a clearly defined contract. The PreparedStatementCreator callback interface creates a prepared statement given a Connection provided by this class, providing SQL and any necessary parameters. The same is true for the CallableStatementCreator interface, which creates callable statements. The RowCallbackHandler interface extracts values from each row of a ResultSet.
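
For example, the RowCallbackHandler interface can be used to process rows one at a time without mapping them to domain objects; the following is a minimal sketch (the table and column names are illustrative):

final List firstNames = new ArrayList();
this.jdbcTemplate.query(
    "select first_name from t_actor",
    new RowCallbackHandler() {
        public void processRow(ResultSet rs) throws SQLException {
            firstNames.add(rs.getString("first_name"));
        }
    });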

The JdbcTemplate can be used within a DAO implementation via direct instantiation with a DataSource reference, or be configured in a Spring IoC container and given to DAOs as a bean reference. Note: the DataSource should always be configured as a bean in the Spring IoC container; in the first case the bean is given to the service directly, in the second case to the prepared template.

Finally, all of the SQL issued by this class is logged at the 'DEBUG' level under the category corresponding to the fully qualified class name of the template instance (typically JdbcTemplate, but it may be different if a custom subclass of the JdbcTemplate class is being used).

11.2.1.1. Examples

Find below some examples of using the JdbcTemplate class. (These examples are not an exhaustive list of all of the functionality exposed by the JdbcTemplate; see the attendant Javadocs for that).

11.2.1.1.1. Querying (SELECT)

A simple query for getting the number of rows in a relation.

int rowCount = this.jdbcTemplate.queryForInt("select count(0) from t_accrual");

A simple query using a bind variable.

int countOfActorsNamedJoe
    = this.jdbcTemplate.queryForInt("select count(0) from t_actors where first_name = ?", new Object[]{"Joe"});

Querying for a String.

String surname = (String) this.jdbcTemplate
    .queryForObject("select surname from t_actor where id = ?", new Object[]{new Long(1212)}, String.class);

Querying and populating a single domain object.

Actor actor = (Actor) this.jdbcTemplate.queryForObject(
    "select first_name, surname from t_actor where id = ?",
    new Object[]{new Long(1212)},
    new RowMapper() {

        public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
            Actor actor = new Actor();
            actor.setFirstName(rs.getString("first_name"));
            actor.setSurname(rs.getString("surname"));
            return actor;
        }
    });

Querying and populating a number of domain objects.

Collection actors = this.jdbcTemplate.query(
    "select first_name, surname from t_actor",
    new RowMapper() {

        public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
            Actor actor = new Actor();
            actor.setFirstName(rs.getString("first_name"));
            actor.setSurname(rs.getString("surname"));
            return actor;
        }
    });

If the last two snippets of code actually existed in the same application, it would make sense to remove the duplication present in the two RowMapper anonymous inner classes, and extract them out into a single class (typically a static inner class) that can then be referenced by DAO methods as needed. For example, the last code snippet might be better off written like so:

public Collection findAllActors() {
    return this.jdbcTemplate.query( "select first_name, surname from t_actor", new ActorMapper());
}

private static final class ActorMapper implements RowMapper {

    public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
        Actor actor = new Actor();
        actor.setFirstName(rs.getString("first_name"));
        actor.setSurname(rs.getString("surname"));
        return actor;
    }
}
11.2.1.1.2. Updating (INSERT/UPDATE/DELETE)

this.jdbcTemplate.update("insert into t_actor (first_name, surname) values (?, ?)", new Object[] {"Leonor", "Watling"});

this.jdbcTemplate.update("update t_actor set weapon = ? where id = ?", new Object[] {"Banjo", new Long(5276)});

this.jdbcTemplate.update("delete from orders"); // :)

11.2.1.1.3. Other operations

The execute(..) method can be used to execute any arbitrary SQL, and as such is often used for DDL statements. It is heavily overloaded with variants taking callback interfaces, bind variable arrays, and suchlike.

this.jdbcTemplate.execute("create table mytable (id integer, name varchar(100))");

Invoking a simple stored procedure (more sophisticated stored procedure support is covered later).

this.jdbcTemplate.update("call SUPPORT.REFRESH_ACTORS_SUMMARY(?)", new Object[]{new Long(unionId)});

11.2.1.2. JdbcTemplate idioms (best practices)

Instances of the JdbcTemplate class are threadsafe once configured. This is important because it means that you can configure a single instance of a JdbcTemplate and then safely inject this shared reference into multiple DAOs (or repositories). To be clear, the JdbcTemplate is stateful, in that it maintains a reference to a DataSource, but this state is not conversational state.

A common idiom when using the JdbcTemplate class (and the associated SimpleJdbcTemplate and NamedParameterJdbcTemplate classes) is to configure a DataSource in your Spring configuration file, and then dependency inject that shared DataSource bean into your DAO classes; the JdbcTemplate is created in the setter for the DataSource. This leads to DAOs that look in part like this:

public class JdbcCorporateEventDao implements CorporateEventDao {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // JDBC-backed implementations of the methods on the CorporateEventDao follow...
}

The attendant configuration might look like this.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

    <bean id="corporateEventDao" class="com.example.JdbcCorporateEventDao">
        <property name="dataSource" ref="dataSource"/>
    </bean>

    <!-- the DataSource (parameterized for configuration via a PropertyPlaceholderConfigurer) -->
    <bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
        <property name="driverClassName" value="${jdbc.driverClassName}"/>
        <property name="url" value="${jdbc.url}"/>
        <property name="username" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
    </bean>

</beans>

If you are using Spring's JdbcDaoSupport class, and your various JDBC-backed DAO classes extend from it, then you inherit a setDataSource(..) method for free from said superclass. It is totally up to you whether or not you inherit from said class; you certainly are not forced to. If you look at the source for the JdbcDaoSupport class you will see that there is not a whole lot to it... it is provided as a convenience only.

Regardless of which of the above template initialization styles you choose to use (or not), there is (almost) certainly no need to create a brand new instance of a JdbcTemplate class each and every time you wish to execute some SQL... remember, once configured, a JdbcTemplate instance is threadsafe. A reason for wanting multiple JdbcTemplate instances would be when you have an application that accesses multiple databases, which requires multiple DataSources, and subsequently multiple differently configured JdbcTemplates.

11.2.2. NamedParameterJdbcTemplate

The NamedParameterJdbcTemplate class adds support for programming JDBC statements using named parameters (as opposed to programming JDBC statements using only classic placeholder ('?') arguments). The NamedParameterJdbcTemplate class wraps a JdbcTemplate, and delegates to the wrapped JdbcTemplate to do much of its work. This section will describe only those areas of the NamedParameterJdbcTemplate class that differ from the JdbcTemplate itself; namely, programming JDBC statements using named parameters.

// some JDBC-backed DAO class...
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
}

public int countOfActorsByFirstName(String firstName) {

    String sql = "select count(0) from T_ACTOR where first_name = :first_name";

    SqlParameterSource namedParameters = new MapSqlParameterSource("first_name", firstName);

    return namedParameterJdbcTemplate.queryForInt(sql, namedParameters);
}

Notice the use of the named parameter notation in the value assigned to the 'sql' variable, and the corresponding value that is plugged into the 'namedParameters' variable (of type MapSqlParameterSource).

If you like, you can also pass along named parameters (and their corresponding values) to a NamedParameterJdbcTemplate instance using the (perhaps more familiar) Map-based style. (The rest of the methods exposed by the NamedParameterJdbcOperations interface - and implemented by the NamedParameterJdbcTemplate class - follow a similar pattern and will not be covered here.)

// some JDBC-backed DAO class...
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
}

public int countOfActorsByFirstName(String firstName) {

    String sql = "select count(0) from T_ACTOR where first_name = :first_name";

    Map namedParameters = Collections.singletonMap("first_name", firstName);

    return this.namedParameterJdbcTemplate.queryForInt(sql, namedParameters);
}

Another nice feature related to the NamedParameterJdbcTemplate (and existing in the same Java package) is the SqlParameterSource interface. You have already seen an example of an implementation of this interface in one of the preceding code snippets (the MapSqlParameterSource class). The entire point of the SqlParameterSource is to serve as a source of named parameter values to a NamedParameterJdbcTemplate. The MapSqlParameterSource class is a very simple implementation that is simply an adapter around a java.util.Map, where the keys are the parameter names and the values are the parameter values.

Another SqlParameterSource implementation is the BeanPropertySqlParameterSource class. This class wraps an arbitrary JavaBean (that is, an instance of a class that adheres to the JavaBean conventions), and uses the properties of the wrapped JavaBean as the source of named parameter values.

public class Actor {

    private Long id;
    private String firstName;
    private String lastName;
    
    public String getFirstName() {
        return this.firstName;
    }
    
    public String getLastName() {
        return this.lastName;
    }
    
    public Long getId() {
        return this.id;
    }
    
    // setters omitted...

}

// some JDBC-backed DAO class...
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
}

public int countOfActors(Actor exampleActor) {

    // notice how the named parameters match the properties of the above 'Actor' class
    String sql = "select count(0) from T_ACTOR where first_name = :firstName and last_name = :lastName";

    SqlParameterSource namedParameters = new BeanPropertySqlParameterSource(exampleActor);

    return this.namedParameterJdbcTemplate.queryForInt(sql, namedParameters);
}

Remember that the NamedParameterJdbcTemplate class wraps a classic JdbcTemplate template; if you need access to the wrapped JdbcTemplate instance (to access some of the functionality only present in the JdbcTemplate class), then you can use the getJdbcOperations() method to access the wrapped JdbcTemplate via the JdbcOperations interface.
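
For example (a hedged sketch; the statement is illustrative):

// dropping down to the wrapped template for functionality that is not
// exposed on the NamedParameterJdbcTemplate itself
this.namedParameterJdbcTemplate.getJdbcOperations().execute("truncate table T_ACTOR");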

See also the section entitled Section 11.2.1.2, “JdbcTemplate idioms (best practices)” for some advice on how to best use the NamedParameterJdbcTemplate class in the context of an application.

11.2.3. SimpleJdbcTemplate

[Note]Note

The functionality offered by the SimpleJdbcTemplate is only available to you if you are using Java 5 (Tiger).

The SimpleJdbcTemplate class is a wrapper around the classic JdbcTemplate that takes advantage of Java 5 language features such as varargs and autoboxing. The SimpleJdbcTemplate class is somewhat of a sop to the syntactic-sugar-like features of Java 5, but as anyone who has developed on Java 5 and then had to move back to developing on a previous version of the JDK will know, those syntactic-sugar-like features sure are nice.

The value-add of the SimpleJdbcTemplate class in the area of syntactic-sugar is best illustrated with a 'before and after' example. The following code snippet shows first some data access code using the classic JdbcTemplate, followed immediately thereafter by a code snippet that does the same job, only this time using the SimpleJdbcTemplate.

// classic JdbcTemplate-style...
private JdbcTemplate jdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.jdbcTemplate = new JdbcTemplate(dataSource);
}

public Actor findActor(long id) {
    String sql = "select id, first_name, last_name from T_ACTOR where id = ?";
    
    RowMapper mapper = new RowMapper() {
    
        public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
            Actor actor = new Actor();
            actor.setId(rs.getLong("id"));
            actor.setFirstName(rs.getString("first_name"));
            actor.setLastName(rs.getString("last_name"));
            return actor;
        }
    };
    
    // notice the cast, the wrapping up of the 'id' argument
    // in an array, and the boxing of the 'id' argument as a reference type
    return (Actor) jdbcTemplate.queryForObject(sql, mapper, new Object[] {Long.valueOf(id)});
}

Here is the same method, only this time using the SimpleJdbcTemplate; notice how much 'cleaner' the code is.

// SimpleJdbcTemplate-style...
private SimpleJdbcTemplate simpleJdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.simpleJdbcTemplate = new SimpleJdbcTemplate(dataSource);
}

public Actor findActor(long id) {
    String sql = "select id, first_name, last_name from T_ACTOR where id = ?";

    ParameterizedRowMapper<Actor> mapper = new ParameterizedRowMapper<Actor>() {
    
        // notice the return type with respect to Java 5 covariant return types
        public Actor mapRow(ResultSet rs, int rowNum) throws SQLException {
            Actor actor = new Actor();
            actor.setId(rs.getLong("id"));
            actor.setFirstName(rs.getString("first_name"));
            actor.setLastName(rs.getString("last_name"));
            return actor;
        }
    };

    return this.simpleJdbcTemplate.queryForObject(sql, mapper, id);
}

See also the section entitled Section 11.2.1.2, “JdbcTemplate idioms (best practices)” for some advice on how to best use the SimpleJdbcTemplate class in the context of an application.

[Note]Note

The SimpleJdbcTemplate class only offers a much smaller subset of the methods exposed on the JdbcTemplate class. If you need to use a method from the JdbcTemplate that is not defined on the SimpleJdbcTemplate, you can always access the underlying JdbcTemplate by calling the getJdbcOperations() method on the SimpleJdbcTemplate, which will then allow you to invoke the method that you want. The only downside is that the methods on the JdbcOperations interface are not generified, so you are back to casting and such again.
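
For example (a minimal sketch; the query is illustrative, and note the non-generified return type when dropping down to the JdbcOperations interface):

List actorRows = this.simpleJdbcTemplate.getJdbcOperations()
        .queryForList("select first_name, last_name from T_ACTOR");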

11.2.4. DataSource

In order to work with data from a database, one needs to obtain a connection to the database. The way Spring does this is through a DataSource. A DataSource is part of the JDBC specification and can be seen as a generalized connection factory. It allows a container or a framework to hide connection pooling and transaction management issues from the application code. As a developer, you do not need to know any details about how to connect to the database; that is the responsibility of the administrator who sets up the datasource. You will most likely have to fulfill both roles while you are developing and testing your code though, but you will not necessarily have to know how the production data source is configured.

When using Spring's JDBC layer, you can either obtain a data source from JNDI or you can configure your own, using an implementation that is provided in the Spring distribution. The latter comes in handy for unit testing outside of a web container. We will use the DriverManagerDataSource implementation for this section but there are several additional implementations that will be covered later on. The DriverManagerDataSource works the same way that you are probably used to working when you obtain a JDBC connection. You have to specify the fully qualified class name of the JDBC driver that you are using so that the DriverManager can load the driver class. Then you have to provide a url that varies between JDBC drivers; consult the documentation for your driver for the correct value to use here. Finally you must provide a username and a password that will be used to connect to the database. Here is an example of how to configure a DriverManagerDataSource:

DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName("org.hsqldb.jdbcDriver");
dataSource.setUrl("jdbc:hsqldb:hsql://localhost:");
dataSource.setUsername("sa");
dataSource.setPassword("");

11.2.5. SQLExceptionTranslator

SQLExceptionTranslator is an interface to be implemented by classes that can translate between SQLExceptions and Spring's own data-access-strategy-agnostic org.springframework.dao.DataAccessException. Implementations can be generic (for example, using SQLState codes for JDBC) or proprietary (for example, using Oracle error codes) for greater precision.

SQLErrorCodeSQLExceptionTranslator is the implementation of SQLExceptionTranslator that is used by default. This implementation uses specific vendor codes; it is more precise than the SQLState implementation, but vendor specific. The error code translations are based on codes held in a JavaBean type class named SQLErrorCodes. This class is created and populated by an SQLErrorCodesFactory, which, as the name suggests, is a factory for creating SQLErrorCodes based on the contents of a configuration file named 'sql-error-codes.xml'. This file is populated with vendor codes; based on the DatabaseProductName taken from the DatabaseMetaData, the codes for the current database are used.

The SQLErrorCodeSQLExceptionTranslator applies the following matching rules:

  • Try custom translation implemented by any subclass. Note that this class is concrete and is typically used itself, in which case this rule does not apply.

  • Apply error code matching. Error codes are obtained from the SQLErrorCodesFactory by default. This looks up error codes from the classpath and keys into them using the database name from the database metadata.

  • Use the fallback translator. SQLStateSQLExceptionTranslator is the default fallback translator.

SQLErrorCodeSQLExceptionTranslator can be extended in the following way:

public class MySQLErrorCodesTranslator extends SQLErrorCodeSQLExceptionTranslator {

    protected DataAccessException customTranslate(String task, String sql, SQLException sqlex) {
        if (sqlex.getErrorCode() == -12345) {
            return new DeadlockLoserDataAccessException(task, sqlex);
        }
        return null;
    }
}

In this example the specific error code '-12345' is translated and any other errors are simply left to be translated by the default translator implementation. To use this custom translator, it is necessary to pass it to the JdbcTemplate using the method setExceptionTranslator and to use this JdbcTemplate for all of the data access processing where this translator is needed. Here is an example of how this custom translator can be used:

// create a JdbcTemplate and set data source 
JdbcTemplate jt = new JdbcTemplate(); 
jt.setDataSource(dataSource); 
// create a custom translator and set the DataSource for the default translation lookup 
MySQLErrorCodesTranslator tr = new MySQLErrorCodesTranslator();
tr.setDataSource(dataSource); 
jt.setExceptionTranslator(tr); 
// use the JdbcTemplate for this SqlUpdate
SqlUpdate su = new SqlUpdate(); 
su.setJdbcTemplate(jt); 
su.setSql("update orders set shipping_charge = shipping_charge * 1.05"); 
su.compile(); 
su.update();

The custom translator is passed a data source because we still want the default translation to look up the error codes in sql-error-codes.xml.

11.2.6. Executing statements

To execute an SQL statement, there is very little code needed. All you need is a DataSource and a JdbcTemplate. Once you have that, you can use a number of convenience methods that are provided with the JdbcTemplate. Here is a short example showing what you need to include for a minimal but fully functional class that creates a new table.

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class ExecuteAStatement {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public void doExecute() {
        this.jdbcTemplate.execute("create table mytable (id integer, name varchar(100))");
    }
}

11.2.7. Running Queries

In addition to the execute methods, there is a large number of query methods. Some of these methods are intended to be used for queries that return a single value. Maybe you want to retrieve a count or a specific value from one row. If that is the case then you can use queryForInt(..), queryForLong(..) or queryForObject(..). The latter will convert the returned JDBC Type to the Java class that is passed in as an argument. If the type conversion is invalid, then an InvalidDataAccessApiUsageException will be thrown. Here is an example that contains two query methods, one for an int and one that queries for a String.

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class RunAQuery {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }
  
    public int getCount() {
        return this.jdbcTemplate.queryForInt("select count(*) from mytable");
    }

    public String getName() {
        return (String) this.jdbcTemplate.queryForObject("select name from mytable", String.class);
    }

}

In addition to the single results query methods there are several methods that return a List with an entry for each row that the query returned. The most generic method is queryForList(..) which returns a List where each entry is a Map with each entry in the map representing the column value for that row. If we add a method to the above example to retrieve a list of all the rows, it would look like this:

private JdbcTemplate jdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.jdbcTemplate = new JdbcTemplate(dataSource);
}

public List getList() {
    return this.jdbcTemplate.queryForList("select * from mytable");
}

The list returned would look something like this:

[{name=Bob, id=1}, {name=Mary, id=2}]

11.2.8. Updating the database

There are also a number of update methods that you can use. Find below an example where a column is updated for a certain primary key. In this example an SQL statement is used that has placeholders for row parameters. Note that the parameter values are passed in as an array of objects (and thus primitives have to be wrapped in the primitive wrapper classes).

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;

public class ExecuteAnUpdate {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public void setName(int id, String name) {
        this.jdbcTemplate.update("update mytable set name = ? where id = ?", new Object[] {name, new Integer(id)});
    }
}

One of the update convenience methods provides support for acquiring the primary keys generated by the database (part of the JDBC 3.0 standard - see chapter 13.6 of the specification for details). The method takes a PreparedStatementCreator as its first argument, and this is the way the required insert statement is specified. The other argument is a KeyHolder, which will contain the generated key on successful return from the update. There is not a standard single way to create an appropriate PreparedStatement (which explains why the method signature is the way it is). An example that works on Oracle and may work on other platforms is:

final String INSERT_SQL = "insert into my_test (name) values(?)";
final String name = "Rob";

KeyHolder keyHolder = new GeneratedKeyHolder();
jdbcTemplate.update(
    new PreparedStatementCreator() {
        public PreparedStatement createPreparedStatement(Connection connection) throws SQLException {
            PreparedStatement ps =
                connection.prepareStatement(INSERT_SQL, new String[] {"id"});
            ps.setString(1, name);
            return ps;
        }
    },
    keyHolder);

// keyHolder.getKey() now contains the generated key

11.3. Controlling database connections

11.3.1. DataSourceUtils

The DataSourceUtils class is a convenient and powerful helper class that provides static methods to obtain connections from JNDI and close connections if necessary. It has support for thread-bound connections, for example for use with DataSourceTransactionManager.

[Note]Note

The getDataSourceFromJndi(..) methods are targeted at applications that do not use the Spring Framework's IoC container. With the latter, it is preferable to preconfigure one's beans or even JdbcTemplate instances in the container itself: a JndiObjectFactoryBean instance can be used to fetch a DataSource from JNDI and give the DataSource bean reference to other beans. Switching to another DataSource then becomes just a matter of configuration; one can even replace the definition of the FactoryBean with a non-JNDI DataSource!
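
A typical manual usage pattern looks something like the following hedged sketch (it assumes a configured DataSource reference):

Connection con = DataSourceUtils.getConnection(dataSource);
try {
    // ... work directly with the JDBC Connection ...
}
finally {
    // releases the connection; if it is bound to a Spring-managed transaction,
    // it is left open for the transaction infrastructure to close
    DataSourceUtils.releaseConnection(con, dataSource);
}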

11.3.2. SmartDataSource

The SmartDataSource interface is to be implemented by classes that can provide a connection to a relational database. It extends the DataSource interface to allow classes using it to query whether or not the connection should be closed after a given operation. This can sometimes be useful for efficiency, in the cases where one knows that one wants to reuse a connection.
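
A sketch of how calling code might honour this contract (the method and names are illustrative):

public void doWork(SmartDataSource dataSource) throws SQLException {
    Connection con = dataSource.getConnection();
    try {
        // ... use the connection ...
    }
    finally {
        // only close the connection if the SmartDataSource says we should
        if (dataSource.shouldClose(con)) {
            con.close();
        }
    }
}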

11.3.3. AbstractDataSource

This is an abstract base class for Spring's DataSource implementations, that takes care of the "uninteresting" glue. This is the class one would extend if one was writing one's own DataSource implementation.

11.3.4. SingleConnectionDataSource

The SingleConnectionDataSource class is an implementation of the SmartDataSource interface that wraps a single Connection that is not closed after use. Obviously, this is not multi-threading capable.

If client code will call close on the assumption of a pooled connection, as when using persistence tools, set suppressClose to true. This will return a close-suppressing proxy instead of the physical connection. Be aware that you will not be able to cast this to a native Oracle Connection or the like anymore.

This is primarily a test class. For example, it enables easy testing of code outside an application server, in conjunction with a simple JNDI environment. In contrast to DriverManagerDataSource, it reuses the same connection all the time, avoiding excessive creation of physical connections.
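
A minimal sketch of configuring a close-suppressing instance programmatically for a test (the connection settings are illustrative):

SingleConnectionDataSource ds = new SingleConnectionDataSource();
ds.setDriverClassName("org.hsqldb.jdbcDriver");
ds.setUrl("jdbc:hsqldb:hsql://localhost:9001");
ds.setUsername("sa");
ds.setPassword("");
// hand out a close-suppressing proxy rather than the physical connection
ds.setSuppressClose(true);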

11.3.5. DriverManagerDataSource

The DriverManagerDataSource class is an implementation of the SmartDataSource interface that configures a plain old JDBC Driver via bean properties, and returns a new connection every time.

This is potentially useful for test or standalone environments outside of a J2EE container, either as a DataSource bean in a Spring IoC container, or in conjunction with a simple JNDI environment. Pool-assuming Connection.close() calls will simply close the connection, so any DataSource-aware persistence code should work. However, using JavaBean style connection pools such as commons-dbcp is so easy, even in a test environment, that it is almost always preferable to use such a connection pool over DriverManagerDataSource.

11.3.6. TransactionAwareDataSourceProxy

TransactionAwareDataSourceProxy is a proxy for a target DataSource, which wraps that target DataSource to add awareness of Spring-managed transactions. In this respect it is similar to a transactional JNDI DataSource as provided by a J2EE server.

[Note]Note

It should almost never be necessary or desirable to use this class, except when existing code must be called and passed a standard JDBC DataSource interface implementation. In this case, it is possible for this code to still be usable, while participating in Spring-managed transactions. It is generally preferable to write your own new code using the higher level abstractions for resource management, such as JdbcTemplate or DataSourceUtils.

(See the TransactionAwareDataSourceProxy Javadocs for more details.)

11.3.7. DataSourceTransactionManager

The DataSourceTransactionManager class is a PlatformTransactionManager implementation for single JDBC datasources. It binds a JDBC connection from the specified data source to the currently executing thread, potentially allowing for one thread connection per data source.

Application code is required to retrieve the JDBC connection via DataSourceUtils.getConnection(DataSource) instead of J2EE's standard DataSource.getConnection. This is recommended anyway, as it throws unchecked org.springframework.dao exceptions instead of checked SQLExceptions. All framework classes like JdbcTemplate use this strategy implicitly. If not used with this transaction manager, the lookup strategy behaves exactly like the common one - it can thus be used in any case.

The DataSourceTransactionManager class supports custom isolation levels, and timeouts that get applied as appropriate JDBC statement query timeouts. To support the latter, application code must either use the JdbcTemplate or call the DataSourceUtils.applyTransactionTimeout(..) method for each created statement.

This implementation can be used instead of JtaTransactionManager in the single resource case, as it does not require the container to support JTA. Switching between both is just a matter of configuration, if you stick to the required connection lookup pattern. Note that JTA does not support custom isolation levels!
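
As a rough sketch, programmatic use with a TransactionTemplate might look like this (the bean wiring is omitted and the names are illustrative):

DataSourceTransactionManager txManager = new DataSourceTransactionManager(dataSource);
TransactionTemplate txTemplate = new TransactionTemplate(txManager);

txTemplate.execute(new TransactionCallbackWithoutResult() {
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // JdbcTemplate (or DataSourceUtils-based) code executed here runs on the
        // connection bound to the current thread by the transaction manager
        jdbcTemplate.update("update mytable set name = ? where id = ?",
                new Object[] {"Bob", new Integer(1)});
    }
});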

11.4. Modeling JDBC operations as Java objects

The org.springframework.jdbc.object package contains classes that allow one to access the database in a more object-oriented manner. By way of an example, one can execute queries and get the results back as a list containing business objects with the relational column data mapped to the properties of the business object. One can also execute stored procedures and run update, delete and insert statements.

[Note]Note

There is a view borne from experience acquired in the field amongst some of the Spring developers that the various RDBMS operation classes described below (with the exception of the StoredProcedure class) can often be replaced with straight JdbcTemplate calls... often it is simpler to use and plain easier to read a DAO method that simply calls a method on a JdbcTemplate directly (as opposed to encapsulating a query as a full-blown class).

It must be stressed however that this is just a view... if you feel that you are getting measurable value from using the RDBMS operation classes, feel free to continue using these classes.

11.4.1. SqlQuery

SqlQuery is a reusable, threadsafe class that encapsulates an SQL query. Subclasses must implement the newRowMapper(..) method to provide a RowMapper instance that can create one object per row obtained from iterating over the ResultSet that is created during the execution of the query. The SqlQuery class is rarely used directly since the MappingSqlQuery subclass provides a much more convenient implementation for mapping rows to Java classes. Other implementations that extend SqlQuery are MappingSqlQueryWithParameters and UpdatableSqlQuery.

11.4.2. MappingSqlQuery

MappingSqlQuery is a reusable query in which concrete subclasses must implement the abstract mapRow(..) method to convert each row of the supplied ResultSet into an object. Find below a brief example of a custom query that maps the data from the customer relation to an instance of the Customer class.

private class CustomerMappingQuery extends MappingSqlQuery {

    public CustomerMappingQuery(DataSource ds) {
        super(ds, "SELECT id, name FROM customer WHERE id = ?");
        super.declareParameter(new SqlParameter("id", Types.INTEGER));
        compile();
    }

    public Object mapRow(ResultSet rs, int rowNumber) throws SQLException {
        Customer cust = new Customer();
        cust.setId((Integer) rs.getObject("id"));
        cust.setName(rs.getString("name"));
        return cust;
    } 
}

We provide a constructor for this customer query that takes the DataSource as the only parameter. In this constructor we call the constructor on the superclass with the DataSource and the SQL that should be executed to retrieve the rows for this query. This SQL will be used to create a PreparedStatement, so it may contain placeholders for any parameters to be passed in during execution. Each parameter must be declared using the declareParameter method, passing in an SqlParameter. The SqlParameter takes a name and the JDBC type as defined in java.sql.Types. After all parameters have been defined we call the compile() method so the statement can be prepared and later be executed.

public Customer getCustomer(Integer id) {
    CustomerMappingQuery custQry = new CustomerMappingQuery(dataSource); 
    Object[] parms = new Object[1];
    parms[0] = id;
    List customers = custQry.execute(parms);
    if (customers.size() > 0) {
        return (Customer) customers.get(0);
    }
    else {
        return null;
    }
}

The method in this example retrieves the customer with the id that is passed in as the only parameter. After creating an instance of the CustomerMappingQuery class we create an array of objects that will contain all parameters that are passed in. In this case there is only one parameter and it is passed in as an Integer. Now we are ready to execute the query using this array of parameters and we get a List that contains a Customer object for each row that was returned for our query. In this case it will only be one entry if there was a match.

11.4.3. SqlUpdate

The SqlUpdate class encapsulates an SQL update. Like a query, an update object is reusable, and like all RdbmsOperation classes, an update can have parameters and is defined in SQL. This class provides a number of update(..) methods analogous to the execute(..) methods of query objects. This class is concrete. Although it can be subclassed (for example to add a custom update method) it can easily be parameterized by setting SQL and declaring parameters.

import java.sql.Types;

import javax.sql.DataSource;

import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.object.SqlUpdate;

public class UpdateCreditRating extends SqlUpdate {

    public UpdateCreditRating(DataSource ds) {
        setDataSource(ds);
        setSql("update customer set credit_rating = ? where id = ?");
        declareParameter(new SqlParameter(Types.NUMERIC));
        declareParameter(new SqlParameter(Types.NUMERIC));
        compile();
    }

    /**
     * @param id for the Customer to be updated
     * @param rating the new value for credit rating
     * @return number of rows updated
     */
    public int run(int id, int rating) {
        Object[] params =
            new Object[] {
                new Integer(rating),
                new Integer(id)};
        return update(params);
    }
}

11.4.4. StoredProcedure

The StoredProcedure class is a superclass for object abstractions of RDBMS stored procedures. This class is abstract, and its various execute(..) methods have protected access, preventing use other than through a subclass that offers tighter typing.

The inherited sql property will be the name of the stored procedure in the RDBMS. Note that JDBC 3.0 introduces named parameters, although the other features provided by this class are still necessary in JDBC 3.0.

Here is an example of a program that calls a function, sysdate(), that comes with any Oracle database. To use the stored procedure functionality one has to create a class that extends StoredProcedure. There are no input parameters, but there is an output parameter that is declared as a date type using the class SqlOutParameter. The execute() method returns a map with an entry for each declared output parameter using the parameter name as the key.

import java.sql.Types;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import javax.sql.DataSource;

import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.datasource.*;
import org.springframework.jdbc.object.StoredProcedure;

public class TestStoredProcedure {

    public static void main(String[] args)  {
        TestStoredProcedure t = new TestStoredProcedure();
        t.test();
        System.out.println("Done!");
    }
    
    void test() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@localhost:1521:mydb");
        ds.setUsername("scott");
        ds.setPassword("tiger");

        MyStoredProcedure sproc = new MyStoredProcedure(ds);
        Map results = sproc.execute();
        printMap(results);
    }

    private class MyStoredProcedure extends StoredProcedure {
        
        private static final String SQL = "sysdate";

        public MyStoredProcedure(DataSource ds) {
            setDataSource(ds);
            setFunction(true);
            setSql(SQL);
            declareParameter(new SqlOutParameter("date", Types.DATE));
            compile();
        }

        public Map execute() {
            // the 'sysdate' sproc has no input parameters, so an empty Map is supplied...
            return execute(new HashMap());
        }
    }

    private static void printMap(Map results) {
        for (Iterator it = results.entrySet().iterator(); it.hasNext(); ) {
            System.out.println(it.next());  
        }
    }
}

Find below an example of a StoredProcedure that has two output parameters (in this case Oracle cursors).

import oracle.jdbc.driver.OracleTypes;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.object.StoredProcedure;

import javax.sql.DataSource;
import java.util.HashMap;
import java.util.Map;

public class TitlesAndGenresStoredProcedure extends StoredProcedure {

    private static final String SPROC_NAME = "AllTitlesAndGenres";

    public TitlesAndGenresStoredProcedure(DataSource dataSource) {
        super(dataSource, SPROC_NAME);
        declareParameter(new SqlOutParameter("titles", OracleTypes.CURSOR, new TitleMapper()));
        declareParameter(new SqlOutParameter("genres", OracleTypes.CURSOR, new GenreMapper()));
        compile();
    }

    public Map execute() {
        // again, this sproc has no input parameters, so an empty Map is supplied...
        return super.execute(new HashMap());
    }
}

Notice how the overloaded variants of the declareParameter(..) method that have been used in the TitlesAndGenresStoredProcedure constructor are passed RowMapper implementation instances; this is a very convenient and powerful way to reuse existing functionality. (The code for the two RowMapper implementations is provided below in the interest of completeness.)

Firstly the TitleMapper class, which simply maps a ResultSet to a Title domain object for each row in the supplied ResultSet.

import com.foo.sprocs.domain.Title;
import org.springframework.jdbc.core.RowMapper;

import java.sql.ResultSet;
import java.sql.SQLException;

public final class TitleMapper implements RowMapper {
    
    public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
        Title title = new Title();
        title.setId(rs.getLong("id"));
        title.setName(rs.getString("name"));
        return title;
    }
}

Secondly, the GenreMapper class, which again simply maps a ResultSet to a Genre domain object for each row in the supplied ResultSet.

import org.springframework.jdbc.core.RowMapper;

import java.sql.ResultSet;
import java.sql.SQLException;

import com.foo.domain.Genre;

public final class GenreMapper implements RowMapper {
    
    public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
        return new Genre(rs.getString("name"));
    }
}

If one needs to pass parameters to a stored procedure (that is, the stored procedure has been declared as having one or more input parameters in its definition in the RDBMS), one would code a strongly typed execute(..) method which would delegate to the superclass' (untyped) execute(Map parameters) method (which has protected access); for example:

import oracle.jdbc.driver.OracleTypes;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.object.StoredProcedure;

import javax.sql.DataSource;
import java.sql.Types;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

public class TitlesAfterDateStoredProcedure extends StoredProcedure {

    private static final String SPROC_NAME = "TitlesAfterDate";
    private static final String CUTOFF_DATE_PARAM = "cutoffDate";

    public TitlesAfterDateStoredProcedure(DataSource dataSource) {
        super(dataSource, SPROC_NAME);
        declareParameter(new SqlParameter(CUTOFF_DATE_PARAM, Types.DATE));
        declareParameter(new SqlOutParameter("titles", OracleTypes.CURSOR, new TitleMapper()));
        compile();
    }

    public Map execute(Date cutoffDate) {
        Map inputs = new HashMap();
        inputs.put(CUTOFF_DATE_PARAM, cutoffDate);
        return super.execute(inputs);
    }
}

11.4.5. SqlFunction

The SqlFunction RDBMS operation class encapsulates an SQL "function" wrapper for a query that returns a single row of results. The default behavior is to return an int, but that can be overridden by using the methods with an extra return type parameter. This is similar to using the queryForXxx methods of the JdbcTemplate. The advantage with SqlFunction is that you don't have to create the JdbcTemplate, it is done behind the scenes.

This class is intended to be used to call SQL functions that return a single result using a query like "select user()" or "select sysdate from dual". It is not intended for calling more complex stored functions or for using a CallableStatement to invoke a stored procedure or stored function. (Use the StoredProcedure or SqlCall classes for this type of processing.)

SqlFunction is a concrete class, and there is typically no need to subclass it. Code using this package can create an object of this type, declaring SQL and parameters, and then invoke the appropriate run method repeatedly to execute the function. Here is an example of retrieving the count of rows from a table:

public int countRows() {
    SqlFunction sf = new SqlFunction(dataSource, "select count(*) from mytable");
    sf.compile();
    return sf.run();
}

Chapter 12. Object Relational Mapping (ORM) data access

12.1. Introduction

The Spring Framework provides integration with Hibernate, JDO, Oracle TopLink, iBATIS SQL Maps and JPA: in terms of resource management, DAO implementation support, and transaction strategies. For example for Hibernate, there is first-class support with lots of IoC convenience features, addressing many typical Hibernate integration issues. All of these support packages for O/R (Object Relational) mappers comply with Spring's generic transaction and DAO exception hierarchies. There are usually two integration styles: either using Spring's DAO 'templates' or coding DAOs against plain Hibernate/JDO/TopLink/etc APIs. In both cases, DAOs can be configured through Dependency Injection and participate in Spring's resource and transaction management.

Spring adds significant support when using the O/R mapping layer of your choice to create data access applications. First of all, you should know that once you start using Spring's support for O/R mapping, you don't have to go all the way. No matter to what extent you use it, you're invited to review and leverage the Spring approach before deciding to take the effort and risk of building a similar infrastructure in-house. Much of the O/R mapping support, no matter what technology you're using, may be used in a library style, as everything is designed as a set of reusable JavaBeans. Usage inside a Spring IoC container does provide additional benefits in terms of ease of configuration and deployment; as such, most examples in this section show configuration inside a Spring container.

Some of the benefits of using the Spring Framework to create your ORM DAOs include:

  • Ease of testing. Spring's IoC approach makes it easy to swap the implementations and config locations of Hibernate SessionFactory instances, JDBC DataSource instances, transaction managers, and mapper object implementations (if needed). This makes it much easier to isolate and test each piece of persistence-related code in isolation.

  • Common data access exceptions. Spring can wrap exceptions from your O/R mapping tool of choice, converting them from proprietary (potentially checked) exceptions to a common runtime DataAccessException hierarchy. This allows you to handle most persistence exceptions, which are non-recoverable, only in the appropriate layers, without annoying boilerplate catches/throws, and exception declarations. You can still trap and handle exceptions anywhere you need to. Remember that JDBC exceptions (including DB specific dialects) are also converted to the same hierarchy, meaning that you can perform some operations with JDBC within a consistent programming model.

  • General resource management. Spring application contexts can handle the location and configuration of Hibernate SessionFactory instances, JDBC DataSource instances, iBATIS SQL Maps configuration objects, and other related resources. This makes these values easy to manage and change. Spring offers efficient, easy and safe handling of persistence resources. For example: related code using Hibernate generally needs to use the same Hibernate Session for efficiency and proper transaction handling. Spring makes it easy to transparently create and bind a Session to the current thread, either by using an explicit 'template' wrapper class at the Java code level or by exposing a current Session through the Hibernate SessionFactory (for DAOs based on plain Hibernate3 API). Thus Spring solves many of the issues that repeatedly arise from typical Hibernate usage, for any transaction environment (local or JTA).

  • Integrated transaction management. Spring allows you to wrap your O/R mapping code with either a declarative, AOP style method interceptor, or an explicit 'template' wrapper class at the Java code level. In either case, transaction semantics are handled for you, and proper transaction handling (rollback, etc) in case of exceptions is taken care of. As discussed below, you also get the benefit of being able to use and swap various transaction managers, without your Hibernate/JDO related code being affected: for example, between local transactions and JTA, with the same full services (such as declarative transactions) available in both scenarios. As an additional benefit, JDBC-related code can fully integrate transactionally with the code you use to do O/R mapping. This is useful for data access that's not suitable for O/R mapping, such as batch processing or streaming of BLOBs, which still needs to share common transactions with ORM operations.

The PetClinic sample in the Spring distribution offers alternative DAO implementations and application context configurations for JDBC, Hibernate, Oracle TopLink, and JPA. PetClinic can therefore serve as a working sample application that illustrates the use of Hibernate, TopLink and JPA in a Spring web application. It also leverages declarative transaction demarcation with different transaction strategies.

The JPetStore sample illustrates the use of iBATIS SQL Maps in a Spring environment. It also features two web tier versions: one based on Spring Web MVC, one based on Struts.

Beyond the samples shipped with Spring, there is a variety of Spring-based O/R mapping samples provided by specific vendors: for example, the JDO implementations JPOX (http://www.jpox.org/) and Kodo (http://www.bea.com/kodo).

12.2. Hibernate

We will start with a coverage of Hibernate in a Spring environment, using it to demonstrate the approach that Spring takes towards integrating O/R mappers. This section will cover many issues in detail and show different variations of DAO implementations and transaction demarcations. Most of these patterns can be directly translated to all other supported O/R mapping tools. The following sections in this chapter will then cover the other O/R mappers, showing briefer examples there.

The following discussion focuses on Hibernate 3: this is the current major production-ready version of Hibernate. Hibernate 2.x, which has been supported in Spring since its inception, continues to be supported... it is just that the following examples all use the Hibernate 3 classes and configuration. All of this can (pretty much) be applied to Hibernate 2.x as-is, using the analogous Hibernate 2.x support package: org.springframework.orm.hibernate, mirroring org.springframework.orm.hibernate3 with analogous support classes for Hibernate 2.x. Furthermore, all references to the org.hibernate package need to be replaced with net.sf.hibernate, following the root package change in Hibernate 3. Simply adapt the package names (as used in the examples) accordingly.

12.2.1. Resource management

Typical business applications are often cluttered with repetitive resource management code. Many projects try to invent their own solutions for this issue, sometimes sacrificing proper handling of failures for programming convenience. Spring advocates strikingly simple solutions for proper resource handling, namely IoC via templating; for example infrastructure classes with callback interfaces, or applying AOP interceptors. The infrastructure cares for proper resource handling, and for appropriate conversion of specific API exceptions to an unchecked infrastructure exception hierarchy. Spring introduces a DAO exception hierarchy, applicable to any data access strategy. For direct JDBC, the JdbcTemplate class mentioned in a previous section cares for connection handling, and for proper conversion of SQLException to the DataAccessException hierarchy, including translation of database-specific SQL error codes to meaningful exception classes. It supports both JTA and JDBC transactions, via respective Spring transaction managers.

Spring also offers Hibernate and JDO support, consisting of a HibernateTemplate / JdoTemplate analogous to JdbcTemplate, a HibernateInterceptor / JdoInterceptor, and a Hibernate / JDO transaction manager. The major goal is to allow for clear application layering, with any data access and transaction technology, and for loose coupling of application objects. No more business service dependencies on the data access or transaction strategy, no more hard-coded resource lookups, no more hard-to-replace singletons, no more custom service registries. One simple and consistent approach to wiring up application objects, keeping them as reusable and free from container dependencies as possible. All the individual data access features are usable on their own but integrate nicely with Spring's application context concept, providing XML-based configuration and cross-referencing of plain JavaBean instances that don't need to be Spring-aware. In a typical Spring app, many important objects are JavaBeans: data access templates, data access objects (that use the templates), transaction managers, business services (that use the data access objects and transaction managers), web view resolvers, web controllers (that use the business services), and so on.

12.2.2. SessionFactory setup in a Spring container

To avoid tying application objects to hard-coded resource lookups, Spring allows you to define resources like a JDBC DataSource or a Hibernate SessionFactory as beans in an application context. Application objects that need to access resources just receive references to such pre-defined instances via bean references (the DAO definition in the next section illustrates this). The following excerpt from an XML application context definition shows how to set up a JDBC DataSource and a Hibernate SessionFactory on top of it:

<beans>

  <bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
    <property name="url" value="jdbc:hsqldb:hsql://localhost:9001"/>
    <property name="username" value="sa"/>
    <property name="password" value=""/>
  </bean>

  <bean id="mySessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="myDataSource"/>
    <property name="mappingResources">
      <list>
        <value>product.hbm.xml</value>
      </list>
    </property>
    <property name="hibernateProperties">
      <value>
        hibernate.dialect=org.hibernate.dialect.HSQLDialect
      </value>
    </property>
  </bean>

</beans>

Note that switching from a local Jakarta Commons DBCP BasicDataSource to a JNDI-located DataSource (usually managed by an application server) is just a matter of configuration:

<beans>

  <bean id="myDataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jdbc/myds"/>
  </bean>

</beans>

You can also access a JNDI-located SessionFactory, using Spring's JndiObjectFactoryBean to retrieve and expose it. However, that is typically not common outside of an EJB context.

12.2.3. The HibernateTemplate

The basic programming model for templating looks as follows, for methods that can be part of any custom data access object or business service. There are no restrictions on the implementation of the surrounding object at all; it just needs to provide a Hibernate SessionFactory. It can get the latter from anywhere, but preferably as a bean reference from a Spring IoC container - via a simple setSessionFactory(..) bean property setter. The following snippets show a DAO definition in a Spring container, referencing the above defined SessionFactory, and an example of a DAO method implementation.

<beans>

  <bean id="myProductDao" class="product.ProductDaoImpl">
    <property name="sessionFactory" ref="mySessionFactory"/>
  </bean>

</beans>
public class ProductDaoImpl implements ProductDao {

    private HibernateTemplate hibernateTemplate;

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.hibernateTemplate = new HibernateTemplate(sessionFactory);
    }

    public Collection loadProductsByCategory(String category) throws DataAccessException {
        return this.hibernateTemplate.find("from test.Product product where product.category=?", category);
    }
}

The HibernateTemplate class provides many methods that mirror the methods exposed on the Hibernate Session interface, in addition to a number of convenience methods such as the one shown above. If you need access to the Session to invoke methods that are not exposed on the HibernateTemplate, you can always drop down to a callback-based approach like so.

public class ProductDaoImpl implements ProductDao {

    private HibernateTemplate hibernateTemplate;

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.hibernateTemplate = new HibernateTemplate(sessionFactory);
    }

    public Collection loadProductsByCategory(final String category) throws DataAccessException {
        return (Collection) this.hibernateTemplate.execute(new HibernateCallback() {
            public Object doInHibernate(Session session) {
                Criteria criteria = session.createCriteria(Product.class);
                criteria.add(Expression.eq("category", category));
                criteria.setMaxResults(6);
                return criteria.list();
            }
        });
    }
}

A callback implementation can effectively be used for any Hibernate data access. HibernateTemplate will ensure that Session instances are properly opened and closed, and automatically participate in transactions. The template instances are thread-safe and reusable; they can thus be kept as instance variables of the surrounding class. For simple single-step actions like a single find, load, saveOrUpdate, or delete call, HibernateTemplate offers alternative convenience methods that can replace such one line callback implementations. Furthermore, Spring provides a convenient HibernateDaoSupport base class that provides a setSessionFactory(..) method for receiving a SessionFactory, and getSessionFactory() and getHibernateTemplate() for use by subclasses. In combination, this allows for very simple DAO implementations for typical requirements:

public class ProductDaoImpl extends HibernateDaoSupport implements ProductDao {

    public Collection loadProductsByCategory(String category) throws DataAccessException {
        return this.getHibernateTemplate().find(
            "from test.Product product where product.category=?", category);
    }
}

12.2.4. Implementing Spring-based DAOs without callbacks

As an alternative to using Spring's HibernateTemplate to implement DAOs, data access code can also be written in a more traditional fashion, without wrapping the Hibernate access code in a callback, while still respecting and participating in Spring's generic DataAccessException hierarchy. The HibernateDaoSupport base class offers methods to access the current transactional Session and to convert exceptions in such a scenario; similar methods are also available as static helpers on the SessionFactoryUtils class. Note that such code will usually pass 'false' as the value of the getSession(..) method's 'allowCreate' argument, to enforce running within a transaction (which avoids the need to close the returned Session, as its lifecycle is managed by the transaction).

public class HibernateProductDao extends HibernateDaoSupport implements ProductDao {

    public Collection loadProductsByCategory(String category) throws DataAccessException, MyException {
        Session session = getSession(false);
        try {
            Query query = session.createQuery("from test.Product product where product.category=?");
            query.setString(0, category);
            List result = query.list();
            if (result == null) {
                throw new MyException("No search results.");
            }
            return result;
        }
        catch (HibernateException ex) {
            throw convertHibernateAccessException(ex);
        }
    }
}

The advantage of such direct Hibernate access code is that it allows any checked application exception to be thrown within the data access code; contrast this to the HibernateTemplate class which is restricted to throwing only unchecked exceptions within the callback. Note that you can often defer the corresponding checks and the throwing of application exceptions to after the callback, which still allows working with HibernateTemplate. In general, the HibernateTemplate class' convenience methods are simpler and more convenient for many scenarios.
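
As a rough sketch of such deferral (reusing the hypothetical MyException from above), the application-specific check simply moves outside the template call:

public class ProductDaoImpl extends HibernateDaoSupport implements ProductDao {

    public Collection loadProductsByCategory(String category) throws DataAccessException, MyException {
        // the convenience method runs within HibernateTemplate; the
        // application exception is checked and thrown after the callback
        Collection result = getHibernateTemplate().find(
            "from test.Product product where product.category=?", category);
        if (result.isEmpty()) {
            throw new MyException("No search results.");
        }
        return result;
    }
}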

12.2.5. Implementing DAOs based on plain Hibernate3 API

Hibernate 3.0.1 introduced a feature called "contextual Sessions", where Hibernate itself manages one current Session per transaction. This is roughly equivalent to Spring's synchronization of one Hibernate Session per transaction. A corresponding DAO implementation, based on the plain Hibernate API, looks as follows:

public class ProductDaoImpl implements ProductDao {

    private SessionFactory sessionFactory;

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public Collection loadProductsByCategory(String category) {
        return this.sessionFactory.getCurrentSession()
                .createQuery("from test.Product product where product.category=?")
                .setParameter(0, category)
                .list();
    }
}

This style is very similar to what you will find in the Hibernate reference documentation and examples, except for holding the SessionFactory in an instance variable. We strongly recommend such an instance-based setup over the old-school static HibernateUtil class from Hibernate's CaveatEmptor sample application. (In general, do not keep any resources in static variables unless absolutely necessary.)

The above DAO follows the Dependency Injection pattern: it fits nicely into a Spring IoC container, just like it would if coded against Spring's HibernateTemplate. Of course, such a DAO can also be set up in plain Java (for example, in unit tests): simply instantiate it and call setSessionFactory(..) with the desired factory reference. As a Spring bean definition, it would look as follows:

<beans>

  <bean id="myProductDao" class="product.ProductDaoImpl">
    <property name="sessionFactory" ref="mySessionFactory"/>
  </bean>

</beans>

The main advantage of this DAO style is that it depends on Hibernate API only; no import of any Spring class is required. This is of course appealing from a non-invasiveness perspective, and will no doubt feel more natural to Hibernate developers.

However, the DAO throws plain HibernateException (which is unchecked, so does not have to be declared or caught), which means that callers can only treat exceptions as generally fatal - unless they want to depend on Hibernate's own exception hierarchy. Catching specific causes such as an optimistic locking failure is not possible without tying the caller to the implementation strategy. This tradeoff might be acceptable to applications that are strongly Hibernate-based and/or do not need any special exception treatment.

Fortunately, Spring's LocalSessionFactoryBean supports Hibernate's SessionFactory.getCurrentSession() method for any Spring transaction strategy, returning the current Spring-managed transactional Session even with HibernateTransactionManager. Of course, the standard behavior of that method remains: returning the current Session associated with the ongoing JTA transaction, if any (no matter whether driven by Spring's JtaTransactionManager, by EJB CMT, or by JTA).

In summary: DAOs can be implemented based on the plain Hibernate3 API, while still being able to participate in Spring-managed transactions.

12.2.6. Programmatic transaction demarcation

Transactions can be demarcated at a higher level of the application, on top of such lower-level data access services spanning any number of operations. There are no restrictions on the implementation of the surrounding business service here either; it just needs a Spring PlatformTransactionManager. Again, the latter can come from anywhere, but preferably as a bean reference via a setTransactionManager(..) method - just like the productDao should be set via a setProductDao(..) method. The following snippets show a transaction manager and a business service definition in a Spring application context, and an example of a business method implementation.

<beans>

  <bean id="myTxManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="mySessionFactory"/>
  </bean>

  <bean id="myProductService" class="product.ProductServiceImpl">
    <property name="transactionManager" ref="myTxManager"/>
    <property name="productDao" ref="myProductDao"/>
  </bean>

</beans>
public class ProductServiceImpl implements ProductService {

    private TransactionTemplate transactionTemplate;
    private ProductDao productDao;

    public void setTransactionManager(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public void setProductDao(ProductDao productDao) {
        this.productDao = productDao;
    }

    public void increasePriceOfAllProductsInCategory(final String category) {
        this.transactionTemplate.execute(new TransactionCallbackWithoutResult() {

                public void doInTransactionWithoutResult(TransactionStatus status) {
                    List productsToChange = productDao.loadProductsByCategory(category);
                    // do the price increase...
                }
            }
        );
    }
}

12.2.7. Declarative transaction demarcation

Alternatively, one can use Spring's declarative transaction support, which essentially enables you to replace explicit transaction demarcation API calls in your Java code with an AOP transaction interceptor configured in a Spring container. This allows you to keep business services free of repetitive transaction demarcation code, and allows you to focus on adding business logic which is where the real value of your application lies. Furthermore, transaction semantics like propagation behavior and isolation level can be changed in a configuration file and do not affect the business service implementations.

<beans>

  <bean id="myTxManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="mySessionFactory"/>
  </bean>

  <bean id="myProductService" class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="proxyInterfaces" value="product.ProductService"/>
    <property name="target">
        <bean class="product.DefaultProductService">
            <property name="productDao" ref="myProductDao"/>
        </bean>
    </property>
    <property name="interceptorNames">
      <list>
        <value>myTxInterceptor</value> <!-- the transaction interceptor (configured elsewhere) -->
      </list>
    </property>
  </bean>

</beans>
public class ProductServiceImpl implements ProductService {

    private ProductDao productDao;

    public void setProductDao(ProductDao productDao) {
        this.productDao = productDao;
    }

    // notice the absence of transaction demarcation code in this method
    // Spring's declarative transaction infrastructure will be demarcating transactions on your behalf 
    public void increasePriceOfAllProductsInCategory(final String category) {
        List productsToChange = this.productDao.loadProductsByCategory(category);
        // ...
    }
}

Spring's TransactionInterceptor allows any checked application exception to be thrown within the callback code, while TransactionTemplate is restricted to unchecked exceptions within the callback. TransactionTemplate will trigger a rollback in case of an unchecked application exception, or if the transaction has been marked rollback-only by the application (via TransactionStatus). TransactionInterceptor behaves the same way by default but allows configurable rollback policies per method.
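
For example, a TransactionTemplate callback that cannot throw a checked exception might instead mark the transaction rollback-only via the passed-in TransactionStatus. A sketch based on the ProductServiceImpl shown above (only the reworked business method is shown):

    public void increasePriceOfAllProductsInCategory(final String category) {
        this.transactionTemplate.execute(new TransactionCallbackWithoutResult() {

                public void doInTransactionWithoutResult(TransactionStatus status) {
                    List productsToChange = productDao.loadProductsByCategory(category);
                    if (productsToChange.isEmpty()) {
                        // no checked exception can be thrown from here, so
                        // simply mark the current transaction for rollback
                        status.setRollbackOnly();
                        return;
                    }
                    // do the price increase...
                }
            }
        );
    }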

The following higher level approach to declarative transactions doesn't use the ProxyFactoryBean, and as such may be easier to use if you have a large number of service objects that you wish to make transactional.

[Note]Note

You are strongly encouraged to read Section 9.5, “Declarative transaction management” before continuing, if you have not already done so.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
       http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
       http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

  <!-- SessionFactory, DataSource, etc. omitted -->

  <bean id="myTxManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="mySessionFactory"/>
  </bean>
  
  <aop:config>
    <aop:pointcut id="productServiceMethods" expression="execution(* product.ProductService.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="productServiceMethods"/>
  </aop:config>

  <tx:advice id="txAdvice" transaction-manager="myTxManager">
    <tx:attributes>
      <tx:method name="increasePrice*" propagation="REQUIRED"/>
      <tx:method name="someOtherBusinessMethod" propagation="REQUIRES_NEW"/>
      <tx:method name="*" propagation="SUPPORTS" read-only="true"/>
    </tx:attributes>
  </tx:advice>

  <bean id="myProductService" class="product.SimpleProductService">
    <property name="productDao" ref="myProductDao"/>
  </bean>

</beans>

12.2.8. Transaction management strategies

Both TransactionTemplate and TransactionInterceptor delegate the actual transaction handling to a PlatformTransactionManager instance, which can be a HibernateTransactionManager (for a single Hibernate SessionFactory, using a ThreadLocal Session under the hood) or a JtaTransactionManager (delegating to the JTA subsystem of the container) for Hibernate applications. You could even use a custom PlatformTransactionManager implementation. So switching from native Hibernate transaction management to JTA, such as when facing distributed transaction requirements for certain deployments of your application, is just a matter of configuration. Simply replace the Hibernate transaction manager with Spring's JTA transaction implementation. Both transaction demarcation and data access code will work without changes, as they just use the generic transaction management APIs.
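
For example, such a switch can amount to nothing more than redefining the transaction manager bean, with all demarcation code and the rest of the configuration left untouched:

<bean id="myTxManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>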

For distributed transactions across multiple Hibernate session factories, simply combine JtaTransactionManager as a transaction strategy with multiple LocalSessionFactoryBean definitions. Each of your DAOs then gets one specific SessionFactory reference passed into its respective bean property. If all underlying JDBC data sources are transactional, container-managed ones, a business service can demarcate transactions across any number of DAOs and any number of session factories without special regard, as long as it is using JtaTransactionManager as the strategy.

<beans>

  <bean id="myDataSource1" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName value="java:comp/env/jdbc/myds1"/>
  </bean>

  <bean id="myDataSource2" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jdbc/myds2"/>
  </bean>

  <bean id="mySessionFactory1" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="myDataSource1"/>
    <property name="mappingResources">
      <list>
        <value>product.hbm.xml</value>
      </list>
    </property>
    <property name="hibernateProperties">
      <value>
        hibernate.dialect=org.hibernate.dialect.MySQLDialect
        hibernate.show_sql=true
      </value>
    </property>
  </bean>

  <bean id="mySessionFactory2" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="myDataSource2"/>
    <property name="mappingResources">
      <list>
        <value>inventory.hbm.xml</value>
      </list>
    </property>
    <property name="hibernateProperties">
      <value>
        hibernate.dialect=org.hibernate.dialect.OracleDialect
      </value>
    </property>
  </bean>

  <bean id="myTxManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>

  <bean id="myProductDao" class="product.ProductDaoImpl">
    <property name="sessionFactory" ref="mySessionFactory1"/>
  </bean>

  <bean id="myInventoryDao" class="product.InventoryDaoImpl">
    <property name="sessionFactory" ref="mySessionFactory2"/>
  </bean>

  <!-- this shows the Spring 1.x style of declarative transaction configuration -->
  <!-- it is totally supported, 100% legal in Spring 2.x, but see also above for the sleeker, Spring 2.0 style -->
  <bean id="myProductService"
      class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
    <property name="transactionManager" ref="myTxManager"/>
    <property name="target">
      <bean class="product.ProductServiceImpl">
        <property name="productDao" ref="myProductDao"/>
        <property name="inventoryDao" ref="myInventoryDao"/>
      </bean>
    </property>
    <property name="transactionAttributes">
      <props>
        <prop key="increasePrice*">PROPAGATION_REQUIRED</prop>
        <prop key="someOtherBusinessMethod">PROPAGATION_REQUIRES_NEW</prop>
        <prop key="*">PROPAGATION_SUPPORTS,readOnly</prop>
      </props>
    </property>
  </bean>

</beans>

Both HibernateTransactionManager and JtaTransactionManager allow for proper JVM-level cache handling with Hibernate - without a container-specific transaction manager lookup or a JCA connector (as long as you are not using EJB to initiate transactions).

HibernateTransactionManager can export the JDBC Connection used by Hibernate to plain JDBC access code, for a specific DataSource. This allows for high-level transaction demarcation with mixed Hibernate/JDBC data access completely without JTA, as long as you are only accessing one database. HibernateTransactionManager will automatically expose the Hibernate transaction as a JDBC transaction if the passed-in SessionFactory has been set up with a DataSource (through the "dataSource" property of the LocalSessionFactoryBean class). Alternatively, the DataSource for which the transactions are supposed to be exposed can also be specified explicitly, through the "dataSource" property of the HibernateTransactionManager class.
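
A configuration sketch of the latter option, explicitly naming the DataSource for which the Hibernate transaction should be exposed (reusing the bean names from the earlier examples):

<bean id="myTxManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
  <property name="sessionFactory" ref="mySessionFactory"/>
  <!-- expose the Hibernate transaction as a JDBC transaction for this DataSource,
       so that plain JDBC access code can participate in it -->
  <property name="dataSource" ref="myDataSource"/>
</bean>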

12.2.9. Container resources versus local resources

Spring's resource management allows for simple switching between a JNDI SessionFactory and a local one, without having to change a single line of application code. The decision as to whether to keep the resource definitions in the container or locally within the application is mainly a matter of the transaction strategy being used. Compared to a Spring-defined local SessionFactory, a manually registered JNDI SessionFactory does not provide any benefits. Deploying a SessionFactory through Hibernate's JCA connector provides the added value of participating in the J2EE server's management infrastructure, but does not add actual value beyond that.

An important benefit of Spring's transaction support is that it isn't bound to a container at all. Configured with any strategy other than JTA, it will work in a standalone or test environment too. Especially for the typical case of single-database transactions, this is a very lightweight and powerful alternative to JTA. When using local EJB Stateless Session Beans to drive transactions, you depend both on an EJB container and on JTA - even if you only access a single database and only use SLSBs for declarative transactions via CMT. The alternative of using JTA programmatically requires a J2EE environment as well. Note that using JTA does not just involve container dependencies in terms of the JTA API itself and of JNDI DataSource instances: for non-Spring, JTA-driven Hibernate transactions, you also have to use the Hibernate JCA connector, or extra Hibernate transaction code with TransactionManagerLookup configured, for proper JVM-level caching.

Spring-driven transactions can work with a locally defined Hibernate SessionFactory nicely, just like with a local JDBC DataSource - if accessing a single database, of course. Therefore you just have to fall back to Spring's JTA transaction strategy when actually facing distributed transaction requirements. Note that a JCA connector needs container-specific deployment steps, and obviously JCA support in the first place. This is far more hassle than deploying a simple web app with local resource definitions and Spring-driven transactions. And you often need the Enterprise Edition of your container, as for example WebLogic Express does not provide JCA. A Spring application with local resources and transactions spanning one single database will work in any J2EE web container (without JTA, JCA, or EJB) - like Tomcat, Resin, or even plain Jetty. Additionally, such a middle tier can be reused in desktop applications or test suites easily.

All things considered: if you do not use EJB, stick with local SessionFactory setup and Spring's HibernateTransactionManager or JtaTransactionManager. You will get all of the benefits including proper transactional JVM-level caching and distributed transactions, without any container deployment hassle. JNDI registration of a Hibernate SessionFactory via the JCA connector really only adds value when used in conjunction with EJBs.

12.2.10. Spurious application server warnings when using Hibernate

In some JTA environments with very strict XADataSource implementations -- currently only some WebLogic and WebSphere versions -- when Hibernate is configured without any awareness of the JTA PlatformTransactionManager object for that environment, it is possible for spurious warnings or exceptions to show up in the application server log. These warnings or exceptions will say something to the effect that the connection being accessed is no longer valid, or JDBC access is no longer valid, possibly because the transaction is no longer active. As an example, here is an actual exception from WebLogic:

java.sql.SQLException: The transaction is no longer active - status: 'Committed'.
   No further JDBC access is allowed within this transaction.

This warning is easy to resolve by simply making Hibernate aware of the JTA PlatformTransactionManager instance, to which it will also synchronize (along with Spring). This may be done in two ways:

  • If in your application context you are already directly obtaining the JTA PlatformTransactionManager object (presumably from JNDI via JndiObjectFactoryBean) and feeding it, for example, to Spring's JtaTransactionManager, then the easiest way is to simply specify a reference to this as the value of LocalSessionFactoryBean's jtaTransactionManager property. Spring will then make the object available to Hibernate; see the configuration sketch after this list.

  • More likely you do not already have the JTA PlatformTransactionManager instance (since Spring's JtaTransactionManager can find it itself), so you need to instead configure Hibernate to also look it up directly. This is done by configuring an application server-specific TransactionManagerLookup class in the Hibernate configuration, as described in the Hibernate manual.
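
A minimal sketch of the first approach follows, assuming the container exposes its JTA TransactionManager at the hypothetical JNDI location java:/TransactionManager (the location and the new bean name are placeholders):

<beans>

  <bean id="jtaTm" class="org.springframework.jndi.JndiObjectFactoryBean">
    <!-- hypothetical JNDI location; depends on the application server -->
    <property name="jndiName" value="java:/TransactionManager"/>
  </bean>

  <bean id="mySessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <!-- dataSource, mappingResources, hibernateProperties as shown earlier -->
    <property name="jtaTransactionManager" ref="jtaTm"/>
  </bean>

  <bean id="myTxManager" class="org.springframework.transaction.jta.JtaTransactionManager">
    <property name="transactionManager" ref="jtaTm"/>
  </bean>

</beans>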

It is not necessary to read any more for proper usage, but the full sequence of events with and without Hibernate being aware of the JTA PlatformTransactionManager will now be described.

When Hibernate is not configured with any awareness of the JTA PlatformTransactionManager, the sequence of events when a JTA transaction commits is as follows:

  • JTA transaction commits

  • Spring's JtaTransactionManager is synchronized to the JTA transaction, so it is called back via an afterCompletion callback by the JTA transaction manager.

  • Among other activities, this can trigger a callback by Spring to Hibernate, via Hibernate's afterTransactionCompletion callback (used to clear the Hibernate cache), followed by an explicit close() call on the Hibernate Session, which results in Hibernate trying to close() the JDBC Connection.

  • In some environments, this Connection.close() call then triggers the warning or error, as the application server no longer considers the Connection usable at all, since the transaction has already been committed.

When Hibernate is configured with awareness of the JTA PlatformTransactionManager, the sequence of events when a JTA transaction commits is instead as follows:

  • JTA transaction is ready to commit

  • Spring's JtaTransactionManager is synchronized to the JTA transaction, so it is called back via a beforeCompletion callback by the JTA transaction manager.

  • Spring is aware that Hibernate itself is synchronized to the JTA transaction, and behaves differently than in the previous scenario. Assuming the Hibernate Session needs to be closed at all, Spring will close it now.

  • JTA transaction commits

  • Hibernate is synchronized to the JTA transaction, so it is called back via an afterCompletion callback by the JTA transaction manager, and can properly clear its cache.

12.3. JDO

Spring supports the standard JDO 1.0/2.0 API as a data access strategy, following the same style as the Hibernate support. The corresponding integration classes reside in the org.springframework.orm.jdo package.

12.3.1. PersistenceManagerFactory setup

Spring provides a LocalPersistenceManagerFactoryBean class that allows for defining a local JDO PersistenceManagerFactory within a Spring application context:

<beans>

  <bean id="myPmf" class="org.springframework.orm.jdo.LocalPersistenceManagerFactoryBean">
    <property name="configLocation" value="classpath:kodo.properties"/>
  </bean>

</beans>

Alternatively, a PersistenceManagerFactory can also be set up through direct instantiation of a PersistenceManagerFactory implementation class. A JDO PersistenceManagerFactory implementation class is supposed to follow the JavaBeans pattern, just like a JDBC DataSource implementation class, which is a natural fit for a Spring bean definition. This setup style usually supports a Spring-defined JDBC DataSource, passed into the "connectionFactory" property. For example, for the open source JDO implementation JPOX (http://www.jpox.org):

<beans>

 <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
   <property name="driverClassName" value="${jdbc.driverClassName}"/>
   <property name="url" value="${jdbc.url}"/>
   <property name="username" value="${jdbc.username}"/>
   <property name="password" value="${jdbc.password}"/>
 </bean>

 <bean id="myPmf" class="org.jpox.PersistenceManagerFactoryImpl" destroy-method="close">
   <property name="connectionFactory" ref="dataSource"/>
   <property name="nontransactionalRead" value="true"/>
 </bean>

</beans>

A JDO PersistenceManagerFactory can also be set up in the JNDI environment of a J2EE application server, usually through the JCA connector provided by the particular JDO implementation. Spring's standard JndiObjectFactoryBean can be used to retrieve and expose such a PersistenceManagerFactory. However, outside an EJB context, there is often no compelling benefit in holding the PersistenceManagerFactory in JNDI: only choose such a setup for a good reason. See "container resources versus local resources" in the Hibernate section for a discussion; the arguments there apply to JDO as well.
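
Such a lookup might look as follows, with the JNDI name being a placeholder for wherever the JCA connector has actually bound the factory:

<beans>

  <bean id="myPmf" class="org.springframework.jndi.JndiObjectFactoryBean">
    <!-- hypothetical JNDI name -->
    <property name="jndiName" value="java:comp/env/jdo/myPmf"/>
  </bean>

</beans>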

12.3.2. JdoTemplate and JdoDaoSupport

Each JDO-based DAO will then receive the PersistenceManagerFactory through dependency injection. Such a DAO could be coded against the plain JDO API, working with the given PersistenceManagerFactory, but will usually be used with the Spring Framework's JdoTemplate:

<beans>
  
  <bean id="myProductDao" class="product.ProductDaoImpl">
    <property name="persistenceManagerFactory" ref="myPmf"/>
  </bean>
  
</beans>
public class ProductDaoImpl implements ProductDao {
  
    private JdoTemplate jdoTemplate;

    public void setPersistenceManagerFactory(PersistenceManagerFactory pmf) {
        this.jdoTemplate = new JdoTemplate(pmf);
    }

    public Collection loadProductsByCategory(final String category) throws DataAccessException {
        return (Collection) this.jdoTemplate.execute(new JdoCallback() {
            public Object doInJdo(PersistenceManager pm) throws JDOException {
                Query query = pm.newQuery(Product.class, "category = pCategory");
                query.declareParameters("String pCategory"); 
                List result = (List) query.execute(category);
                // do some further stuff with the result list
                return result;
            }
        });
    }
}

A callback implementation can effectively be used for any JDO data access. JdoTemplate will ensure that PersistenceManagers are properly opened and closed, and automatically participate in transactions. The template instances are thread-safe and reusable; they can thus be kept as instance variables of the surrounding class. For simple single-step actions such as a single find, load, makePersistent, or delete call, JdoTemplate offers alternative convenience methods that can replace such one line callback implementations. Furthermore, Spring provides a convenient JdoDaoSupport base class that provides a setPersistenceManagerFactory(..) method for receiving a PersistenceManagerFactory, and getPersistenceManagerFactory() and getJdoTemplate() for use by subclasses. In combination, this allows for very simple DAO implementations for typical requirements:

public class ProductDaoImpl extends JdoDaoSupport implements ProductDao {
  
    public Collection loadProductsByCategory(String category) throws DataAccessException {
        return getJdoTemplate().find(
            Product.class, "category = pCategory", "String category", new Object[] {category});
    }
}

As an alternative to working with Spring's JdoTemplate, you can also code Spring-based DAOs at the JDO API level, explicitly opening and closing a PersistenceManager. As elaborated in the corresponding Hibernate section, the main advantage of this approach is that your data access code is able to throw checked exceptions. JdoDaoSupport offers a variety of support methods for this scenario, for fetching and releasing a transactional PersistenceManager as well as for converting exceptions.

12.3.3. Implementing DAOs based on the plain JDO API

DAOs can also be written against the plain JDO API, without any Spring dependencies, directly using an injected PersistenceManagerFactory. A corresponding DAO implementation looks as follows:

public class ProductDaoImpl implements ProductDao {

    private PersistenceManagerFactory persistenceManagerFactory;

    public void setPersistenceManagerFactory(PersistenceManagerFactory pmf) {
        this.persistenceManagerFactory = pmf;
    }

    public Collection loadProductsByCategory(String category) {
        PersistenceManager pm = this.persistenceManagerFactory.getPersistenceManager();
        try {
            Query query = pm.newQuery(Product.class, "category = pCategory");
            query.declareParameters("String pCategory"); 
            return (Collection) query.execute(category);
        }
        finally {
            pm.close();
        }
    }
}

As the above DAO still follows the Dependency Injection pattern, it still fits nicely into a Spring container, just like it would if coded against Spring's JdoTemplate:

<beans>

  <bean id="myProductDao" class="product.ProductDaoImpl">
    <property name="persistenceManagerFactory" ref="myPmf"/>
  </bean>

</beans>

The main issue with such DAOs is that they always get a new PersistenceManager from the factory. To still access a Spring-managed transactional PersistenceManager, consider defining a TransactionAwarePersistenceManagerFactoryProxy (as included in Spring) in front of your target PersistenceManagerFactory, passing the proxy into your DAOs.

<beans>

  <bean id="myPmfProxy"
      class="org.springframework.orm.jdo.TransactionAwarePersistenceManagerFactoryProxy">
    <property name="targetPersistenceManagerFactory" ref="myPmf"/>
  </bean>

  <bean id="myProductDao" class="product.ProductDaoImpl">
    <property name="persistenceManagerFactory" ref="myPmfProxy"/>
  </bean>

</beans>

Your data access code will then receive a transactional PersistenceManager (if any) from the PersistenceManagerFactory.getPersistenceManager() method that it calls. The latter method call goes through the proxy, which will first check for a current transactional PersistenceManager before getting a new one from the factory. close() calls on the PersistenceManager will be ignored in case of a transactional PersistenceManager.

If your data access code will always run within an active transaction (or at least within active transaction synchronization), it is safe to omit the PersistenceManager.close() call and thus the entire finally block, which you might prefer in order to keep your DAO implementations concise:

public class ProductDaoImpl implements ProductDao {

    private PersistenceManagerFactory persistenceManagerFactory;

    public void setPersistenceManagerFactory(PersistenceManagerFactory pmf) {
        this.persistenceManagerFactory = pmf;
    }

    public Collection loadProductsByCategory(String category) {
        PersistenceManager pm = this.persistenceManagerFactory.getPersistenceManager();
        Query query = pm.newQuery(Product.class, "category = pCategory");
        query.declareParameters("String pCategory"); 
        return (Collection) query.execute(category);
    }
}

With such DAOs that rely on active transactions, it is recommended to enforce active transactions by turning TransactionAwarePersistenceManagerFactoryProxy's "allowCreate" flag off:

<beans>

  <bean id="myPmfProxy"
      class="org.springframework.orm.jdo.TransactionAwarePersistenceManagerFactoryProxy">
    <property name="targetPersistenceManagerFactory" ref="myPmf"/>
    <property name="allowCreate" value="false"/>
  </bean>

  <bean id="myProductDao" class="product.ProductDaoImpl">
    <property name="persistenceManagerFactory" ref="myPmfProxy"/>
  </bean>

</beans>

The main advantage of this DAO style is that it depends on JDO API only; no import of any Spring class is required. This is of course appealing from a non-invasiveness perspective, and might feel more natural to JDO developers.

However, the DAO throws plain JDOException (which is unchecked, so does not have to be declared or caught), which means that callers can only treat exceptions as generally fatal - unless they want to depend on JDO's own exception structure. Catching specific causes such as an optimistic locking failure is not possible without tying the caller to the implementation strategy. This tradeoff might be acceptable to applications that are strongly JDO-based and/or do not need any special exception treatment.

In summary: DAOs can be implemented based on plain JDO API, while still being able to participate in Spring-managed transactions. This might in particular appeal to people already familiar with JDO, feeling more natural to them. However, such DAOs will throw plain JDOException; conversion to Spring's DataAccessException would have to happen explicitly (if desired).

12.3.4. Transaction management

To execute service operations within transactions, you can use Spring's common declarative transaction facilities. For example:

<?xml version="1.0" encoding="UTF-8"?>
<beans
        xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:aop="http://www.springframework.org/schema/aop"
        xmlns:tx="http://www.springframework.org/schema/tx"
        xsi:schemaLocation="
   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

  <bean id="myTxManager" class="org.springframework.orm.jdo.JdoTransactionManager">
    <property name="persistenceManagerFactory" ref="myPmf"/>
  </bean>

  <bean id="myProductService" class="product.ProductServiceImpl">
    <property name="productDao" ref="myProductDao"/>
  </bean>

  <tx:advice id="txAdvice" transaction-manager="txManager">
    <tx:attributes>
      <tx:method name="increasePrice*" propagation="REQUIRED"/>
      <tx:method name="someOtherBusinessMethod" propagation="REQUIRES_NEW"/>
      <tx:method name="*" propagation="SUPPORTS" read-only="true"/>
    </tx:attributes>
  </tx:advice>

  <aop:config>
    <aop:pointcut id="productServiceMethods" expression="execution(* product.ProductService.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="productServiceMethods"/>
  </aop:config>

</beans>

Note that JDO requires an active transaction when modifying a persistent object. There is no concept like a non-transactional flush in JDO, in contrast to Hibernate. For this reason, the chosen JDO implementation needs to be set up for a specific environment: in particular, it needs to be explicitly set up for JTA synchronization, to detect an active JTA transaction itself. This is not necessary for local transactions as performed by Spring's JdoTransactionManager, but it is necessary for participating in JTA transactions (whether driven by Spring's JtaTransactionManager or by EJB CMT / plain JTA).

JdoTransactionManager is capable of exposing a JDO transaction to JDBC access code that accesses the same JDBC DataSource, provided that the registered JdoDialect supports retrieval of the underlying JDBC Connection. This is by default the case for JDBC-based JDO 2.0 implementations; for JDO 1.0 implementations, a custom JdoDialect needs to be used. See the next section for details on the JdoDialect mechanism.
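
As a wiring sketch only (the JdbcPriceDao class is hypothetical and assumed to use a JdbcTemplate internally), such mixed JDO/JDBC access simply shares the same DataSource that backs the PersistenceManagerFactory:

<beans>

  <bean id="myTxManager" class="org.springframework.orm.jdo.JdoTransactionManager">
    <property name="persistenceManagerFactory" ref="myPmf"/>
  </bean>

  <!-- JDO-based DAO -->
  <bean id="myProductDao" class="product.ProductDaoImpl">
    <property name="persistenceManagerFactory" ref="myPmf"/>
  </bean>

  <!-- hypothetical JDBC-based DAO, pointed at the same DataSource as the PMF -->
  <bean id="myPriceDao" class="product.JdbcPriceDao">
    <property name="dataSource" ref="dataSource"/>
  </bean>

</beans>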

12.3.5. JdoDialect

As an advanced feature, both JdoTemplate and JdoTransactionManager support a custom JdoDialect, to be passed into the "jdoDialect" bean property. In such a scenario, the DAOs won't receive a PersistenceManagerFactory reference but rather a full JdoTemplate instance (for example, passed into JdoDaoSupport's "jdoTemplate" property). A JdoDialect implementation can enable some advanced features supported by Spring, usually in a vendor-specific manner:

  • applying specific transaction semantics (such as custom isolation level or transaction timeout)

  • retrieving the transactional JDBC Connection (for exposure to JDBC-based DAOs)

  • applying query timeouts (automatically calculated from Spring-managed transaction timeout)

  • eagerly flushing a PersistenceManager (to make transactional changes visible to JDBC-based data access code)

  • advanced translation of JDOExceptions to Spring DataAccessExceptions

This is particularly valuable for JDO 1.0 implementations, where none of those features are covered by the standard API. On JDO 2.0, most of those features are supported in a standard manner; hence, Spring's DefaultJdoDialect uses the corresponding JDO 2.0 API methods by default (as of Spring 1.2). For special transaction semantics and for advanced translation of exceptions, it is still valuable to derive vendor-specific JdoDialect subclasses.

See the JdoDialect Javadoc for more details on its operations and how they are used within Spring's JDO support.
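
As a configuration sketch of the "jdoDialect" property mentioned above (MyJdoDialect being a hypothetical vendor-specific DefaultJdoDialect subclass):

<beans>

  <bean id="myJdoDialect" class="product.MyJdoDialect"/>

  <bean id="myTxManager" class="org.springframework.orm.jdo.JdoTransactionManager">
    <property name="persistenceManagerFactory" ref="myPmf"/>
    <property name="jdoDialect" ref="myJdoDialect"/>
  </bean>

</beans>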

12.4. Oracle TopLink

Since Spring 1.2, Spring supports Oracle TopLink (http://www.oracle.com/technology/products/ias/toplink) as a data access strategy, following the same style as the Hibernate support. Both TopLink 9.0.4 (the production version as of Spring 1.2) and 10.1.3 (still in beta as of Spring 1.2) are supported. The corresponding integration classes reside in the org.springframework.orm.toplink package.

Spring's TopLink support has been co-developed with the Oracle TopLink team. Many thanks to the TopLink team, in particular to Jim Clark who helped to clarify details in all areas!

12.4.1. SessionFactory abstraction

TopLink itself does not ship with a SessionFactory abstraction. Instead, multi-threaded access is based on the concept of a central ServerSession, which in turn is able to spawn ClientSession instances for single-threaded usage. For flexible setup options, Spring defines a SessionFactory abstraction for TopLink, making it possible to switch between different Session creation strategies.

As a one-stop shop, Spring provides a LocalSessionFactoryBean class that allows for defining a TopLink SessionFactory with bean-style configuration. It needs to be configured with the location of the TopLink session configuration file, and usually also receives a Spring-managed JDBC DataSource to use.

<beans>

  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${jdbc.driverClassName}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
  </bean>

  <bean id="mySessionFactory" class="org.springframework.orm.toplink.LocalSessionFactoryBean">
    <property name="configLocation" value="toplink-sessions.xml"/>
    <property name="dataSource" ref="dataSource"/>
  </bean>
    
</beans>
<toplink-configuration>

  <session>
    <name>Session</name>
    <project-xml>toplink-mappings.xml</project-xml>
    <session-type>
      <server-session/>
    </session-type>
    <enable-logging>true</enable-logging>
    <logging-options/>
  </session>

</toplink-configuration>

Usually, LocalSessionFactoryBean will hold a multi-threaded TopLink ServerSession underneath and create appropriate client Sessions for it: either a plain Session (typical), a managed ClientSession, or a transaction-aware Session (the latter are mainly used internally by Spring's TopLink support). It might also hold a single-threaded TopLink DatabaseSession; this is rather unusual, though.

12.4.2. TopLinkTemplate and TopLinkDaoSupport

Each TopLink-based DAO will then receive the SessionFactory through dependency injection, i.e. through a bean property setter or through a constructor argument. Such a DAO could be coded against the plain TopLink API, fetching a Session from the given SessionFactory, but will usually be used with Spring's TopLinkTemplate:

<beans>
  
  <bean id="myProductDao" class="product.ProductDaoImpl">
    <property name="sessionFactory" ref="mySessionFactory"/>
  </bean>
  
</beans>
public class ProductDaoImpl implements ProductDao {
  
    private TopLinkTemplate tlTemplate;

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.tlTemplate = new TopLinkTemplate(sessionFactory);
    }

    public Collection loadProductsByCategory(final String category) throws DataAccessException {
        return (Collection) this.tlTemplate.execute(new TopLinkCallback() {
            public Object doInTopLink(Session session) throws TopLinkException {
                ReadAllQuery findOwnersQuery = new ReadAllQuery(Product.class);
                findOwnersQuery.addArgument("Category");
                ExpressionBuilder builder = findOwnersQuery.getExpressionBuilder();
                findOwnersQuery.setSelectionCriteria(
                    builder.get("category").like(builder.getParameter("Category")));

                Vector args = new Vector();
                args.add(category);
                List result = (List) session.executeQuery(findOwnersQuery, args);
                // do some further stuff with the result list
                return result;
            }
        });
    }
}

A callback implementation can effectively be used for any TopLink data access. TopLinkTemplate will ensure that Sessions are properly opened and closed, and automatically participate in transactions. The template instances are thread-safe and reusable; they can thus be kept as instance variables of the surrounding class. For simple single-step actions such as a single executeQuery, readAll, readById, or merge call, TopLinkTemplate offers alternative convenience methods that can replace such one line callback implementations. Furthermore, Spring provides a convenient TopLinkDaoSupport base class that provides a setSessionFactory(..) method for receiving a SessionFactory, and getSessionFactory() and getTopLinkTemplate() for use by subclasses. In combination, this allows for simple DAO implementations for typical requirements:

public class ProductDaoImpl extends TopLinkDaoSupport implements ProductDao {
  
    public Collection loadProductsByCategory(String category) throws DataAccessException {
        ReadAllQuery findOwnersQuery = new ReadAllQuery(Product.class);
        findOwnersQuery.addArgument("Category");
        ExpressionBuilder builder = findOwnersQuery.getExpressionBuilder();
        findOwnersQuery.setSelectionCriteria(
            builder.get("category").like(builder.getParameter("Category")));

        return (Collection) getTopLinkTemplate().executeQuery(findOwnersQuery, new Object[] {category});
    }
}

Side note: TopLink query objects are thread-safe and can be cached within the DAO, i.e. created on startup and kept in instance variables.
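
One possible way to apply this, as a sketch: build the query once in the constructor and reuse it for every invocation.

public class ProductDaoImpl extends TopLinkDaoSupport implements ProductDao {

    private final ReadAllQuery findByCategoryQuery;

    public ProductDaoImpl() {
        // built once at startup; TopLink query objects are thread-safe
        findByCategoryQuery = new ReadAllQuery(Product.class);
        findByCategoryQuery.addArgument("Category");
        ExpressionBuilder builder = findByCategoryQuery.getExpressionBuilder();
        findByCategoryQuery.setSelectionCriteria(
            builder.get("category").like(builder.getParameter("Category")));
    }

    public Collection loadProductsByCategory(String category) throws DataAccessException {
        return (Collection) getTopLinkTemplate().executeQuery(findByCategoryQuery, new Object[] {category});
    }
}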

As an alternative to working with Spring's TopLinkTemplate, you can also code your TopLink data access based on the raw TopLink API, explicitly opening and closing a Session. As elaborated in the corresponding Hibernate section, the main advantage of this approach is that your data access code is able to throw checked exceptions. TopLinkDaoSupport offers a variety of support methods for this scenario, for fetching and releasing a transactional Session as well as for converting exceptions.

12.4.3. Implementing DAOs based on plain TopLink API

DAOs can also be written against plain TopLink API, without any Spring dependencies, directly using an injected TopLink Session. The latter will usually be based on a SessionFactory defined by a LocalSessionFactoryBean, exposed for bean references of type Session through Spring's TransactionAwareSessionAdapter.

The getActiveSession() method defined on TopLink's Session interface will return the current transactional Session in such a scenario. If there is no active transaction, it will return the shared TopLink ServerSession as-is, which is only supposed to be used directly for read-only access. There is also an analogous getActiveUnitOfWork() method, returning the TopLink UnitOfWork associated with the current transaction, if any (returning null otherwise).

A corresponding DAO implementation looks as follows:

public class ProductDaoImpl implements ProductDao {

    private Session session;

    public void setSession(Session session) {
        this.session = session;
    }

    public Collection loadProductsByCategory(String category) {
        ReadAllQuery findOwnersQuery = new ReadAllQuery(Product.class);
        findOwnersQuery.addArgument("Category");
        ExpressionBuilder builder = findOwnersQuery.getExpressionBuilder();
        findOwnersQuery.setSelectionCriteria(
            builder.get("category").like(builder.getParameter("Category")));

        Vector args = new Vector();
        args.add(category);
        return (Collection) this.session.getActiveSession().executeQuery(findOwnersQuery, args);
    }
}

As the above DAO still follows the Dependency Injection pattern, it still fits nicely into a Spring application context, just like it would if coded against Spring's TopLinkTemplate. Spring's TransactionAwareSessionAdapter is used to expose a bean reference of type Session, to be passed into the DAO:

<beans>

  <bean id="mySessionAdapter"
      class="org.springframework.orm.toplink.support.TransactionAwareSessionAdapter">
    <property name="sessionFactory" ref="mySessionFactory"/>
  </bean>

  <bean id="myProductDao" class="product.ProductDaoImpl">
    <property name="session" ref="mySessionAdapter"/>
  </bean>

</beans>

The main advantage of this DAO style is that it depends on TopLink API only; no import of any Spring class is required. This is of course appealing from a non-invasiveness perspective, and might feel more natural to TopLink developers.

However, the DAO throws plain TopLinkException (which is unchecked, so does not have to be declared or caught), which means that callers can only treat exceptions as generally fatal - unless they want to depend on TopLink's own exception structure. Catching specific causes such as an optimistic locking failure is not possible without tying the caller to the implementation strategy. This tradeoff might be acceptable to applications that are strongly TopLink-based and/or do not need any special exception treatment.

A further disadvantage of that DAO style is that TopLink's standard getActiveSession() feature only works within JTA transactions. It does not work with any other transaction strategy out of the box, in particular not with local TopLink transactions.

Fortunately, Spring's TransactionAwareSessionAdapter exposes a corresponding proxy for the TopLink ServerSession which supports TopLink's Session.getActiveSession() and Session.getActiveUnitOfWork() methods for any Spring transaction strategy, returning the current Spring-managed transactional Session even with TopLinkTransactionManager. Of course, the standard behavior of that method remains: returning the current Session associated with the ongoing JTA transaction, if any (no matter whether driven by Spring's JtaTransactionManager, by EJB CMT, or by plain JTA).

In summary: DAOs can be implemented based on plain TopLink API, while still being able to participate in Spring-managed transactions. This might in particular appeal to people already familiar with TopLink, feeling more natural to them. However, such DAOs will throw plain TopLinkException; conversion to Spring's DataAccessException would have to happen explicitly (if desired).

12.4.4. Transaction management

To execute service operations within transactions, you can use Spring's common declarative transaction facilities. For example:

<?xml version="1.0" encoding="UTF-8"?>
<beans
        xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:aop="http://www.springframework.org/schema/aop"
        xmlns:tx="http://www.springframework.org/schema/tx"
        xsi:schemaLocation="
   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

  <bean id="myTxManager" class="org.springframework.orm.toplink.TopLinkTransactionManager">
    <property name="sessionFactory" ref="mySessionFactory"/>
  </bean>

  <bean id="myProductService" class="product.ProductServiceImpl">
    <property name="productDao" ref="myProductDao"/>
  </bean>
  
  <aop:config>
    <aop:pointcut id="productServiceMethods" expression="execution(* product.ProductService.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="productServiceMethods"/>
  </aop:config>

  <tx:advice id="txAdvice" transaction-manager="myTxManager">
    <tx:attributes>
      <tx:method name="increasePrice*" propagation="REQUIRED"/>
      <tx:method name="someOtherBusinessMethod" propagation="REQUIRES_NEW"/>
      <tx:method name="*" propagation="SUPPORTS" read-only="true"/>
    </tx:attributes>
  </tx:advice>

</beans>

Note that TopLink requires an active UnitOfWork for modifying a persistent object. (You should never modify objects returned by a plain TopLink Session - those are usually read-only objects, directly taken from the second-level cache!) There is no concept like a non-transactional flush in TopLink, in contrast to Hibernate. For this reason, TopLink needs to be set up for a specific environment: in particular, it needs to be explicitly set up for JTA synchronization, to detect an active JTA transaction itself and expose a corresponding active Session and UnitOfWork. This is not necessary for local transactions as performed by Spring's TopLinkTransactionManager, but it is necessary for participating in JTA transactions (whether driven by Spring's JtaTransactionManager or by EJB CMT / plain JTA).

Within your TopLink-based DAO code, use the Session.getActiveUnitOfWork() method to access the current UnitOfWork and perform write operations through it. This will only work within an active transaction (both within Spring-managed transactions and plain JTA transactions). For special needs, you can also acquire separate UnitOfWork instances that won't participate in the current transaction; this is hardly needed, though.
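
A rough sketch of such a write operation within the plain-TopLink ProductDaoImpl shown above (the Product price accessors are hypothetical):

    public void increasePrice(Product product, double amount) {
        // only valid within an active transaction; the UnitOfWork registers a
        // working clone and flushes any changes to it at commit time
        UnitOfWork uow = this.session.getActiveUnitOfWork();
        Product clone = (Product) uow.registerObject(product);
        clone.setPrice(clone.getPrice() + amount);
    }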

TopLinkTransactionManager is capable of exposing a TopLink transaction to JDBC access code that accesses the same JDBC DataSource, provided that TopLink works with JDBC in the backend and is thus able to expose the underlying JDBC Connection. The DataSource to expose the transactions for needs to be specified explicitly; it won't be autodetected.

12.5. iBATIS SQL Maps

The iBATIS support in the Spring Framework much resembles the JDBC / Hibernate support in that it supports the same template-style programming and, just as with JDBC or Hibernate, works with Spring's exception hierarchy and lets you enjoy all the IoC features Spring has.

Transaction management can be handled through Spring's standard facilities. There are no special transaction strategies for iBATIS, as there is no special transactional resource involved other than a JDBC Connection. Hence, Spring's standard JDBC DataSourceTransactionManager or JtaTransactionManager are perfectly sufficient.
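
For example, a minimal sketch of such a setup, assuming the dataSource bean defined in the next section:

<bean id="myTxManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
  <property name="dataSource" ref="dataSource"/>
</bean>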

[Note]Note

Spring supports both iBATIS 1.x and 2.x. However, only the iBATIS 2.x support is shipped with the core Spring distribution. The iBATIS 1.x support classes were moved to the Spring Modules project as of Spring 2.0, and you are directed there for documentation.

12.5.1. Setting up the SqlMapClient

If we want to map the previous Account class with iBATIS 2.x, we need to create the following SQL map 'Account.xml':

<sqlMap namespace="Account">

  <resultMap id="result" class="examples.Account">
    <result property="name" column="NAME" columnIndex="1"/>
    <result property="email" column="EMAIL" columnIndex="2"/>
  </resultMap>

  <select id="getAccountByEmail" resultMap="result">
    select ACCOUNT.NAME, ACCOUNT.EMAIL
    from ACCOUNT
    where ACCOUNT.EMAIL = #value#
  </select>

  <insert id="insertAccount">
    insert into ACCOUNT (NAME, EMAIL) values (#name#, #email#)
  </insert>

</sqlMap>

The configuration file for iBATIS 2 looks like this:

<sqlMapConfig>

  <sqlMap resource="example/Account.xml"/>

</sqlMapConfig>

Remember that iBATIS loads resources from the class path, so be sure to add the 'Account.xml' file to the class path.

We can use the SqlMapClientFactoryBean in the Spring container. Note that with iBATIS SQL Maps 2.x, the JDBC DataSource is usually specified on the SqlMapClientFactoryBean, which enables lazy loading.

<beans>

  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${jdbc.driverClassName}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
  </bean>

  <bean id="sqlMapClient" class="org.springframework.orm.ibatis.SqlMapClientFactoryBean">
    <property name="configLocation" value="WEB-INF/sqlmap-config.xml"/>
    <property name="dataSource" ref="dataSource"/>
  </bean>

</beans>

12.5.2. Using SqlMapClientTemplate and SqlMapClientDaoSupport

The SqlMapClientDaoSupport class offers convenience functionality similar to that of SqlMapDaoSupport. We extend it to implement our DAO:

public class SqlMapAccountDao extends SqlMapClientDaoSupport implements AccountDao {

    public Account getAccount(String email) throws DataAccessException {
        return (Account) getSqlMapClientTemplate().queryForObject("getAccountByEmail", email);
    }

    public void insertAccount(Account account) throws DataAccessException {
        getSqlMapClientTemplate().update("insertAccount", account);
    }
}

In the DAO, we use the pre-configured SqlMapClientTemplate to execute the queries, after setting up the SqlMapAccountDao in the application context and wiring it with our SqlMapClient instance:

<beans>

  <bean id="accountDao" class="example.SqlMapAccountDao">
    <property name="sqlMapClient" ref="sqlMapClient"/>
  </bean>

</beans>

Note that a SqlMapClientTemplate instance could also be created manually, passing in the SqlMapClient as a constructor argument. The SqlMapClientDaoSupport base class simply pre-initializes a SqlMapClientTemplate instance for us.
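
Such a manually wired variant might look like the following sketch, which is functionally equivalent to extending SqlMapClientDaoSupport:

public class SqlMapAccountDao implements AccountDao {

    private SqlMapClientTemplate sqlMapClientTemplate;

    public void setSqlMapClient(SqlMapClient sqlMapClient) {
        // does what SqlMapClientDaoSupport would otherwise pre-initialize for us
        this.sqlMapClientTemplate = new SqlMapClientTemplate(sqlMapClient);
    }

    public Account getAccount(String email) throws DataAccessException {
        return (Account) this.sqlMapClientTemplate.queryForObject("getAccountByEmail", email);
    }

    public void insertAccount(Account account) throws DataAccessException {
        this.sqlMapClientTemplate.update("insertAccount", account);
    }
}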

The SqlMapClientTemplate also offers a generic execute method, taking a custom SqlMapClientCallback implementation as argument. This can, for example, be used for batching:

public class SqlMapAccountDao extends SqlMapClientDaoSupport implements AccountDao {

    public void insertAccount(final Account account) throws DataAccessException {
        getSqlMapClientTemplate().execute(new SqlMapClientCallback() {
            public Object doInSqlMapClient(SqlMapExecutor executor) throws SQLException {
                executor.startBatch();
                executor.update("insertAccount", account);
                executor.update("insertAddress", account.getAddress());
                executor.executeBatch();
                return null;
            }
        });
    }
}

In general, any combination of operations offered by the native SqlMapExecutor API can be used in such a callback. Any SQLException thrown will automatically get converted to Spring's generic DataAccessException hierarchy.

12.5.3. Implementing DAOs based on plain iBATIS API

DAOs can also be written against the plain iBATIS API, without any Spring dependencies, directly using an injected SqlMapClient. A corresponding DAO implementation looks as follows:

public class SqlMapAccountDao implements AccountDao {
        
    private SqlMapClient sqlMapClient;
    
    public void setSqlMapClient(SqlMapClient sqlMapClient) {
        this.sqlMapClient = sqlMapClient;
    }

    public Account getAccount(String email) {
        try {
            return (Account) this.sqlMapClient.queryForObject("getAccountByEmail", email);
        }
        catch (SQLException ex) {
            throw new MyDaoException(ex);
        }
    }

    public void insertAccount(Account account) throws DataAccessException {
        try {
            this.sqlMapClient.update("insertAccount", account);
        }
        catch (SQLException ex) {
            throw new MyDaoException(ex);
        }
    }
}

In such a scenario, the SQLException thrown by the iBATIS API needs to be handled in a custom fashion: usually by wrapping it in your own application-specific DAO exception. Wiring in the application context would still look the same as before, because the plain iBATIS-based DAO still follows the Dependency Injection pattern:

<beans>

  <bean id="accountDao" class="example.SqlMapAccountDao">
    <property name="sqlMapClient" ref="sqlMapClient"/>
  </bean>

</beans>

12.6. JPA

Spring JPA (available under the org.springframework.orm.jpa package) offers comprehensive support for the Java Persistence API in a similar manner to the integration with Hibernate or JDO, while being aware of the underlying implementation in order to provide additional features.

12.6.1. JPA setup in a Spring environment

Spring JPA offers two ways of setting up a JPA EntityManagerFactory:

12.6.1.1. LocalEntityManagerFactoryBean

The LocalEntityManagerFactoryBean creates an EntityManagerFactory suitable for environments which solely use JPA for data access. The factory bean will use the JPA PersistenceProvider autodetection mechanism and, in most cases, requires only the persistence unit name:

<beans>

   <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
      <property name="persistenceUnitName" value="myPersistenceUnit"/>
   </bean>

</beans>

Switching to a JNDI-obtained EntityManagerFactory (for example, in JTA environments) is just a matter of changing the XML configuration:

<beans>

    <jee:jndi-lookup id="entityManagerFactory" jndi-name="jpa/myPersistenceUnit"/>

</beans>

12.6.1.2. LocalContainerEntityManagerFactoryBean

The LocalContainerEntityManagerFactoryBean gives full control over the JPA EntityManagerFactory and is appropriate for environments where fine-grained customization is required. The LocalContainerEntityManagerFactoryBean will create a PersistenceUnitInfo based on the 'persistence.xml' file, the supplied dataSourceLookup strategy, and the loadTimeWeaver. It is thus possible to work with custom DataSources outside of JNDI and to control the weaving process.

<beans>

  <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="someDataSource"/>
    <property name="loadTimeWeaver">
      <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/>
    </property>
  </bean>

</beans>

The LoadTimeWeaver is a Spring-provided interface that allows JPA ClassTransformer instances to be plugged in in a specific manner, depending on the environment (web container/application server). Hooking ClassTransformers through a JDK 5.0 agent is typically not efficient: such an agent works against the entire virtual machine and inspects every class that is loaded, which is typically undesirable in a production server environment.

Spring provides a number of LoadTimeWeaver implementations for various environments, allowing ClassTransformer instances to be applied only per classloader and not per VM.
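
For reference, the contract that these implementations fulfil looks roughly as follows; this is a simplified sketch, so consult the javadoc of org.springframework.instrument.classloading.LoadTimeWeaver for the authoritative definition.

public interface LoadTimeWeaver {

    // registers a ClassFileTransformer (typically adapted from a JPA ClassTransformer)
    // with the underlying classloader
    void addTransformer(ClassFileTransformer transformer);

    // the classloader whose loaded classes are subject to transformation
    ClassLoader getInstrumentableClassLoader();

    // a temporary, throwaway classloader, e.g. for provider-internal class inspection
    ClassLoader getThrowawayClassLoader();
}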

12.6.1.2.1. Tomcat setup

Jakarta Tomcat's default classloader does not support class transformation but allows custom classloaders to be used. Spring offers the TomcatInstrumentableClassLoader (inside the org.springframework.instrument.classloading.tomcat package) which extends the Tomcat classloader (WebappClassLoader) and allows JPA ClassTransformer instances to 'enhance' all classes loaded by it. In short, JPA transformers will be applied only inside a specific web application (which uses the TomcatInstrumentableClassLoader).

In order to use the custom classloader on Tomcat 4.x/5.x/5.5.x:

  1. Copy spring-tomcat-weaver.jar into $CATALINA_HOME/server/lib (where $CATALINA_HOME represents the root of the Tomcat installation).

  2. Instruct Tomcat to use the custom classloader (instead of the default one) by editing the web application context file:

    <Context path="/myWebApp" docBase="/my/webApp/location">
       <Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader" />
    </Context>

    The Tomcat 5.0.x and 5.5.x series support several context locations: the server configuration file ($CATALINA_HOME/conf/server.xml), the default context configuration ($CATALINA_HOME/conf/context.xml) that affects all deployed web applications, and per-webapp configurations, deployed either on the server side ($CATALINA_HOME/conf/[enginename]/[hostname]/my-webapp-context.xml) or along with the webapp (your-webapp.war/META-INF/context.xml). For efficiency, the configuration style embedded inside the web-app is recommended, since then only applications which use JPA will use the custom classloader. See the Tomcat 5.x documentation for more details about available context locations.

    Note that Tomcat versions prior to 5.5.20 contained a bug in the XML configuration parsing that prevented usage of the Loader tag inside server.xml, regardless of whether a classloader is specified and whether it is the official or a custom one. See Tomcat's bugzilla for more details.

    If you are using Tomcat 5.5.20+, you can set useSystemClassLoaderAsParent to false to fix the problem:

    <Context path="/myWebApp" docBase="/my/webApp/location">
       <Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader" 
                              useSystemClassLoaderAsParent="false"/>
    </Context>

    In Tomcat 4.x, one can use the same context.xml and place it under $CATALINA_HOME/webapps, or modify $CATALINA_HOME/conf/server.xml to use the custom classloader by default. See the Tomcat 4.x documentation for more information.

On Tomcat 6.0.x, the steps are similar, but the weaver jar goes into a different location:

  1. Copy spring-tomcat-weaver.jar into $CATALINA_HOME/lib (where $CATALINA_HOME represents the root of the Tomcat installation).

  2. Instruct Tomcat to use the custom classloader (instead of the default one) by editing the web application context file:

    <Context path="/myWebApp" docBase="/my/webApp/location">
       <Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader"/>
    </Context>

    The Tomcat 6.0.x series (similar to 5.0.x/5.5.x) supports several context locations: the server configuration file ($CATALINA_HOME/conf/server.xml), the default context configuration ($CATALINA_HOME/conf/context.xml) that affects all deployed web applications, and per-webapp configurations, deployed either on the server side ($CATALINA_HOME/conf/[enginename]/[hostname]/my-webapp-context.xml) or along with the webapp (your-webapp.war/META-INF/context.xml). For efficiency, the configuration style embedded inside the web-app is recommended, since then only applications which use JPA will use the custom classloader. See the Tomcat 6.0.x documentation for more details about available context locations.

The last step, required on all Tomcat versions, is to use the appropriate LoadTimeWeaver when configuring the LocalContainerEntityManagerFactoryBean:

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="loadTimeWeaver">
    <bean class="org.springframework.instrument.classloading.ReflectiveLoadTimeWeaver"/>
  </property>
</bean>

Using this technique, JPA applications relying on instrumentation can run in Tomcat without the need for an agent. This is especially important when hosting applications which rely on different JPA implementations, since the JPA transformers are applied only at the classloader level and are thus isolated from each other.

Note

If TopLink is being used as a JPA provider under Tomcat, please place the toplink-essentials jar in the $CATALINA_HOME/shared/lib folder instead of inside your war file.

12.6.1.2.2. OC4J setup (10.1.3.1+)

As Oracle's OC4J classloader has native bytecode transformation support, switching from a JDK agent to a LoadTimeWeaver can be done purely through the application's Spring configuration:

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="loadTimeWeaver">
    <bean class="org.springframework.instrument.classloading.oc4j.OC4JLoadTimeWeaver"/>
  </property>
</bean>

12.6.1.2.3. GlassFish

The GlassFish application server provides, out of the box, an instrumentation-capable classloader